| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/19178
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19178/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19178/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19178/events
|
https://github.com/huggingface/transformers/pull/19178
| 1,384,481,366
|
PR_kwDOCUB6oc4_iHRx
| 19,178
|
Fix cached lookup filepath on windows for hub
|
{
"login": "kjerk",
"id": 2738686,
"node_id": "MDQ6VXNlcjI3Mzg2ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2738686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kjerk",
"html_url": "https://github.com/kjerk",
"followers_url": "https://api.github.com/users/kjerk/followers",
"following_url": "https://api.github.com/users/kjerk/following{/other_user}",
"gists_url": "https://api.github.com/users/kjerk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kjerk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kjerk/subscriptions",
"organizations_url": "https://api.github.com/users/kjerk/orgs",
"repos_url": "https://api.github.com/users/kjerk/repos",
"events_url": "https://api.github.com/users/kjerk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kjerk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Re-ran black on hub.py to fix a spacing inconsistency 💀",
"_The documentation is not available anymore as the PR was closed or merged._",
"Mmm, some of the failure on the CI are due to a bug now fixed in the return of `cached_file`. Could you do a quick rebase on the main branch so I can see clearer in the failed tests?",
"Thanks for the rebase, we now have a clearer idea of the failing tests! This seems to break the functionality somehow (basically the commit hash is not found anymore). Maybe the regex need a small adaptation.\r\nI'll dive into this tomorrow!",
"All green, thanks for iterating with us!",
"> All green, thanks for iterating with us!\r\n\r\nNo problem, thanks for all the suggestions :)\r\n\r\nThese cross platform fixes are always comically annoying, fix for Windows, break on *nix, of course!"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a small safety mechanism to hub.py for cached lookups of model files on Windows systems.
On Windows, Python resolves file paths with backslashes under the hood (`C:\\Path\\To\\File.ext`), which makes the `"snapshots/([^/]+)/"` regex fail to find the commit hash. This change adds a safety mechanism in `extract_commit_hash` that unifies the resolved file paths (replacing `\\` with `/`, which works fine in all Windows APIs).
## Code at issue
https://github.com/huggingface/transformers/blob/fa4eeb4fd342cdbad50d1eeacdd7d7d7bc23b080/src/transformers/utils/hub.py#L222-L226
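The normalization described above can be sketched as follows. This is an illustrative standalone helper, not the actual `extract_commit_hash` from `transformers` (the real function's signature and surrounding logic differ):

```python
import re
from typing import Optional


def extract_commit_hash_normalized(resolved_file: Optional[str]) -> Optional[str]:
    """Sketch of the fix: unify path separators before matching the regex."""
    if resolved_file is None:
        return None
    # On Windows, resolved paths use "\\" separators, which the POSIX-style
    # "snapshots/([^/]+)/" pattern cannot match; normalize them first.
    resolved_file = resolved_file.replace("\\", "/")
    match = re.search(r"snapshots/([^/]+)/", resolved_file)
    return match.group(1) if match else None
```

With the normalization in place, a Windows cache path like `C:\...\snapshots\<hash>\config.json` yields the same commit hash as its *nix equivalent.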
## Found while trying
```python
CLIPTokenizer.from_pretrained(version)
```
## Before Change

### Error

## After Change

<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (None pertinent found)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
## Tests Run
* tests/utils/*.py

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19178/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19178",
"html_url": "https://github.com/huggingface/transformers/pull/19178",
"diff_url": "https://github.com/huggingface/transformers/pull/19178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19178.patch",
"merged_at": 1664565220000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19177/events
|
https://github.com/huggingface/transformers/issues/19177
| 1,384,481,063
|
I_kwDOCUB6oc5ShYEn
| 19,177
|
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
|
{
"login": "dxlong2000",
"id": 54766384,
"node_id": "MDQ6VXNlcjU0NzY2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/54766384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dxlong2000",
"html_url": "https://github.com/dxlong2000",
"followers_url": "https://api.github.com/users/dxlong2000/followers",
"following_url": "https://api.github.com/users/dxlong2000/following{/other_user}",
"gists_url": "https://api.github.com/users/dxlong2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dxlong2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dxlong2000/subscriptions",
"organizations_url": "https://api.github.com/users/dxlong2000/orgs",
"repos_url": "https://api.github.com/users/dxlong2000/repos",
"events_url": "https://api.github.com/users/dxlong2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/dxlong2000/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I have a similar situation. I had image classification working a couple of weeks ago with a local (disk) dataset, using run_image_classification.py, but now that code no longer works.\r\n\r\nI run the script within a Jupyter Notebook with\r\n```\r\n%run Experimento_5/run_image_classification.py\\\r\n--train_dir {ruta_dataset}\\\r\n--output_dir Experimento_5/modelos/\\\r\n--remove_unused_columns False\\\r\n--do_train\\\r\n--do_eval\r\n```\r\n\r\nAnd after it loads the dataset I get:\r\n```\r\nDataset imagefolder downloaded and prepared to /root/.cache/huggingface/datasets/imagefolder/default-b8c6bbfc7a1635cf/0.0.0/e872d3ec27c6c200a8881a4af52930df7eca3372b19aa4d0f5db74a2fded8141. Subsequent calls will reuse this data.\r\n100%\r\n1/1 [00:00<00:00, 14.38it/s]\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nFile /drive/Experimento_5/run_image_classification.py:388, in <module>\r\n 384 trainer.create_model_card(**kwargs)\r\n 387 if __name__ == \"__main__\":\r\n--> 388 main()\r\n\r\nFile /drive/Experimento_5/run_image_classification.py:236, in main()\r\n 234 if data_args.validation_dir is not None:\r\n 235 data_files[\"validation\"] = os.path.join(data_args.validation_dir, \"**\")\r\n--> 236 dataset = load_dataset(\r\n 237 \"imagefolder\",\r\n 238 data_files=data_files,\r\n 239 cache_dir=model_args.cache_dir,\r\n 240 task=\"image-classification\",\r\n 241 )\r\n 243 # If we don't have a validation split, split off a percentage of train as validation.\r\n 244 data_args.train_val_split = None if \"validation\" in dataset.keys() else data_args.train_val_split\r\n\r\nFile /usr/local/lib/python3.8/dist-packages/datasets/load.py:1713, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1711 # Rename and cast features to match 
task schema\r\n 1712 if task is not None:\r\n-> 1713 ds = ds.prepare_for_task(task)\r\n 1714 if save_infos:\r\n 1715 builder_instance._save_infos()\r\n\r\nFile /usr/local/lib/python3.8/dist-packages/datasets/dataset_dict.py:1272, in DatasetDict.prepare_for_task(self, task, id)\r\n 1269 @is_documented_by(Dataset.prepare_for_task)\r\n 1270 def prepare_for_task(self, task: Union[str, TaskTemplate], id: int = 0) -> \"DatasetDict\":\r\n 1271 self._check_values_type()\r\n-> 1272 return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})\r\n\r\nFile /usr/local/lib/python3.8/dist-packages/datasets/dataset_dict.py:1272, in <dictcomp>(.0)\r\n 1269 @is_documented_by(Dataset.prepare_for_task)\r\n 1270 def prepare_for_task(self, task: Union[str, TaskTemplate], id: int = 0) -> \"DatasetDict\":\r\n 1271 self._check_values_type()\r\n-> 1272 return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})\r\n\r\nFile /usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py:2171, in Dataset.prepare_for_task(self, task, id)\r\n 2169 compatible_templates = [template for template in (self.info.task_templates or []) if template.task == task]\r\n 2170 if not compatible_templates:\r\n-> 2171 raise ValueError(\r\n 2172 f\"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}\"\r\n 2173 )\r\n 2175 if not 0 <= id < len(compatible_templates):\r\n 2176 templates_list_str = \"\\n\".join(\r\n 2177 f\"- `{idx}` for task {template}\" for idx, template in enumerate(compatible_templates)\r\n 2178 )\r\n\r\nValueError: Task image-classification is not compatible with this dataset! Available tasks: []\r\n```",
"Solved it by downgrading to datasets==2.4.0",
"Cc @mariosasko",
"> Solved it by downgrading to datasets==2.4.0\r\n\r\nHi,\r\n\r\nAfter downgrading ```dataset```, I need to install ```evaluate``` again. I did it by ```pip install evaluate``` and I run the experiment. After doing so, I got this error:\r\n\r\n```\r\n[INFO|trainer.py:1628] 2022-09-29 17:35:38,675 >> ***** Running training *****\r\n[INFO|trainer.py:1629] 2022-09-29 17:35:38,676 >> Num examples = 5394\r\n[INFO|trainer.py:1630] 2022-09-29 17:35:38,676 >> Num Epochs = 10\r\n[INFO|trainer.py:1631] 2022-09-29 17:35:38,676 >> Instantaneous batch size per device = 16\r\n[INFO|trainer.py:1632] 2022-09-29 17:35:38,676 >> Total train batch size (w. parallel, distributed & accumulation) = 16\r\n[INFO|trainer.py:1633] 2022-09-29 17:35:38,676 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1634] 2022-09-29 17:35:38,676 >> Total optimization steps = 3380\r\n 0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"run_image_classification.py\", line 388, in <module>\r\n main()\r\n File \"run_image_classification.py\", line 362, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1525, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1737, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 721, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File 
\"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2166, in __getitem__\r\n key,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2151, in _getitem\r\n pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 387, in format_row\r\n formatted_batch = self.format_batch(pa_table)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 418, in format_batch\r\n return self.transform(batch)\r\n File \"run_image_classification.py\", line 315, in train_transforms\r\n _train_transforms(pil_img.convert(\"RGB\")) for pil_img in example_batch[\"image\"]\r\nKeyError: 'image'\r\n 0% 0/3380 [00:00<?, ?it/s]\r\n```\r\n\r\nDo you have any idea?\r\n\r\nThanks!",
"@NielsRogge @mariosasko ",
"Hi @NielsRogge,\r\n\r\nI have tested this issue and it seems that the problem was still there:\r\n\r\n```\r\n0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"run_image_classification.py\", line 388, in <module>\r\n main()\r\n File \"run_image_classification.py\", line 362, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1504, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1716, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 721, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2166, in __getitem__\r\n key,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2151, in _getitem\r\n pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File 
\"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 387, in format_row\r\n formatted_batch = self.format_batch(pa_table)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 418, in format_batch\r\n return self.transform(batch)\r\n File \"run_image_classification.py\", line 315, in train_transforms\r\n _train_transforms(pil_img.convert(\"RGB\")) for pil_img in example_batch[\"image\"]\r\nKeyError: 'image'\r\n 0% 0/3380 [00:00<?, ?it/s]\r\n```\r\n\r\nDo you have any suggestion?\r\n\r\nThanks!",
"Hi! We still need to make a patch release on the `datasets` side for my fix to take effect. In the meantime, you can install `datasets` directly from `main`:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"When installing dataset directly from main, I got this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_image_classification.py\", line 36, in <module>\r\n import evaluate\r\n File \"/usr/local/lib/python3.7/dist-packages/evaluate/__init__.py\", line 37, in <module>\r\n from .hub import push_to_hub\r\n File \"/usr/local/lib/python3.7/dist-packages/evaluate/hub.py\", line 4, in <module>\r\n from datasets.utils.metadata import known_task_ids\r\nImportError: cannot import name 'known_task_ids' from 'datasets.utils.metadata' (/usr/local/lib/python3.7/dist-packages/datasets/utils/metadata.py)\r\n```",
"cc @lvwerra do we need to re-add known_task_ids to not break evaluate ? We moved those lists to the Hub, `datasets` doesn't contain any task list anymore",
"Is there a way we can get them from the Hub? Then we can replace this dependancy on `datasets`.",
"I don't think so, but the list is available here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n\r\nAnyway for the next release I think we need to still have the known_task_ids variable, otherwise it makes evaluate crash.\r\n\r\nDo you think you could fix this on the evaluate side and do a release ?\r\nOtherwise we can also re-add this list temporarily",
"@dxlong2000 we just released `datasets` 2.5.2 to fix this issue ;)",
"Hi @lvwerra @mariosasko,\r\n\r\nI have installed \r\n```\r\n!pip install git+https://github.com/huggingface/transformers\r\n!pip install -r requirements.txt\r\n!pip install datasets==2.5.2\r\n!pip install evaluate\r\n```\r\n\r\nand it seems that the problem is still there:\r\n\r\n```\r\n[INFO|trainer.py:1613] 2022-10-10 04:21:42,170 >> Total optimization steps = 3380\r\n 0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"run_image_classification.py\", line 388, in <module>\r\n main()\r\n File \"run_image_classification.py\", line 362, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1504, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1716, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py\", line 721, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2229, in __getitem__\r\n key,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2214, in _getitem\r\n pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n File 
\"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 387, in format_row\r\n formatted_batch = self.format_batch(pa_table)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py\", line 418, in format_batch\r\n return self.transform(batch)\r\n File \"run_image_classification.py\", line 315, in train_transforms\r\n _train_transforms(pil_img.convert(\"RGB\")) for pil_img in example_batch[\"image\"]\r\nKeyError: 'image'\r\n 0% 0/3380 [00:00<?, ?it/s]\r\n```",
"Hi,\r\n\r\nThis is another error I think, not related to datasets or evaluate. Do you have an \"image\" column in your dataset? Did you specify `--remove_unused_columns False` when running the script?",
"This issue seems to persist in datasets 2.6.1\r\n\r\nAfter loading my dataset from the Hub (which was created locally using split-folders and then uploaded to the hub), the dataset looks ok:\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 4797\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 1200\r\n })\r\n})\r\n```\r\n\r\n, but then I get `ValueError: Task image-classification is not compatible with this dataset! Available tasks: []` when running `run_image_classification.py`",
"You can remove the `.prepare_for_task()` call in run_image_classification.py for now.\r\n\r\nThis is an issue with `datasets` not recognizing an existing task, feel free to open an issue on the `datasets` repo",
"The issue seems to persist. Is there any solution for this? `prepare_for_task()` is not called from run_image_classification.py (at least in the current version) @lhoestq ",
"```\r\nFile \"/workdir/transformer-sparsity/examples/pytorch/image-classification/run_image_classification_no_check.py\", line 431, in <module>\r\n task=\"image-classification\",\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 1757, in load_dataset\r\n main()\r\n File \"/workdir/transformer-sparsity/examples/pytorch/image-classification/run_image_classification_no_check.py\", line 267, in main\r\n task=\"image-classification\",\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 1757, in load_dataset\r\n ds = ds.prepare_for_task(task)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 1278, in prepare_for_task\r\n return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 1278, in <dictcomp>\r\n ds = ds.prepare_for_task(task)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 1278, in prepare_for_task\r\n return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 2301, in prepare_for_task\r\n return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 1278, in <dictcomp>\r\n f\"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}\"\r\nValueError: Task image-classification is not compatible with this dataset! Available tasks: []\r\n```",
"@yazdanbakhsh what's your transformers + datasets version?",
"@NielsRogge Thanks for the message. I am using ViT + ImageNet-1K\r\n\r\nhttps://huggingface.co/datasets/imagenet-1k\r\nhttps://huggingface.co/google/vit-base-patch16-224\r\n\r\nI am using head for installing HuggingFace.",
"Forgot to mention that I am using offline datasets. I pass the directory with arrow format. ",
"My other solution is to use \"load_from_disk\" (I am adding this option to `run_image_classifiction`). I will update the issue if it works. ",
"@NielsRogge I think the issue is that `self.info = self._load_info()` is not called when HF_DATASET_OFFLINE is true. \r\n\r\nhttps://github.com/huggingface/datasets/blob/232a43943e87dfedcc328a9a3d3b4d89ea5c6627/src/datasets/builder.py#L788"
] | 1,663
| 1,671
| 1,664
|
NONE
| null |
### System Info
Dear all,
Thank you for your great work. I tried to run the ```image-classification``` example on my simple dataset and got the error below. The version of transformers I used is the newest one. Do you have any idea what happened?
Thanks very much!
```
WARNING:datasets.builder:Using custom data configuration default-9107176dcf18ce11
WARNING:datasets.builder:Found cached dataset imagefolder (/root/.cache/huggingface/datasets/imagefolder/default-9107176dcf18ce11/0.0.0/e872d3ec27c6c200a8881a4af52930df7eca3372b19aa4d0f5db74a2fded8141)
100% 1/1 [00:00<00:00, 45.80it/s]
Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 240, in main
task="image-classification",
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1713, in load_dataset
ds = ds.prepare_for_task(task)
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 1272, in prepare_for_task
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 1272, in <dictcomp>
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2172, in prepare_for_task
f"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}"
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
```
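For illustration, the compatibility check that raises this error boils down to roughly the following (a simplified sketch paraphrased from the `datasets` traceback above, not the library's actual API):

```python
def check_task_compatibility(task_templates, task):
    """Simplified sketch of the check in Dataset.prepare_for_task.

    If the dataset registered no task templates matching `task`, a
    ValueError is raised with an empty "Available tasks" list -- which is
    exactly what happens here when `imagefolder` exposes no templates.
    """
    compatible = [t for t in (task_templates or []) if t == task]
    if not compatible:
        available = list(dict.fromkeys(task_templates or []))
        raise ValueError(
            f"Task {task} is not compatible with this dataset! "
            f"Available tasks: {available}"
        )
    return compatible[0]
```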
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!CUDA_DIVISIBLE_DEVICES=0, python3 run_image_classification.py \
--model_name_or_path facebook/convnext-tiny-224 \
--train_dir $TRAIN_DIR \
--output_dir $OUTPUT_DIR \
--do_train \
--do_eval \
--learning_rate 1e-5 \
--num_train_epochs 10 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337 \
--overwrite_output_dir
### Expected behavior
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19177/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19176
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19176/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19176/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19176/events
|
https://github.com/huggingface/transformers/pull/19176
| 1,384,351,618
|
PR_kwDOCUB6oc4_hraL
| 19,176
|
Bump protobuf from 3.19.4 to 3.19.5 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 3.19.4 to 3.19.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/protocolbuffers/protobuf/releases">protobuf's releases</a>.</em></p>
<blockquote>
<h2>Protocol Buffers v3.19.5</h2>
<h1>C++</h1>
<ul>
<li>Reduce memory consumption of MessageSet parsing</li>
<li>This release addresses a <a href="https://github.com/protocolbuffers/protobuf/security/advisories/GHSA-8gq9-2x98-w8hf">Security Advisory for C++ and Python users</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/b464cfbee18c71c40e761a5273ad369f3547294b"><code>b464cfb</code></a> Updating changelog</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/40859fb1c03bfbffe10cdb8009d08ff7e8d8a2f2"><code>40859fb</code></a> Updating version.json and repo version numbers to: 19.5</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/3b175f173903c934f5ba0d1726b430ddbce7ea56"><code>3b175f1</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10543">#10543</a> from deannagarcia/3.19.x</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/c05b5f3755af2f6a05c37cb0930373ac3e37463f"><code>c05b5f3</code></a> Add missing includes</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/0299c03005fbfe086d8394fb7a873a8a21fe327f"><code>0299c03</code></a> Apply patch</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/0a722f1573e629f8c3adc8fd4d298522b667548c"><code>0a722f1</code></a> Update version.json with "lts": true (<a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10533">#10533</a>)</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/d5eb60a56081930c706198e459480ab3204e435c"><code>d5eb60a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10530">#10530</a> from protocolbuffers/deannagarcia-patch-6</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/6cf1f78c27c15ae66fb7714798c82de24d4aa2a8"><code>6cf1f78</code></a> Update version.json</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/97fc8447c7b2441bff9b5be02d0964bfe4926302"><code>97fc844</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10504">#10504</a> from deannagarcia/3.19.x</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/29d60a2fa478d3c222a615c39cbf29918f194877"><code>29d60a2</code></a> Add version file</li>
<li>Additional commits viewable in <a href="https://github.com/protocolbuffers/protobuf/compare/v3.19.4...v3.19.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19176/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19176",
"html_url": "https://github.com/huggingface/transformers/pull/19176",
"diff_url": "https://github.com/huggingface/transformers/pull/19176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19176.patch",
"merged_at": 1664196916000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19175
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19175/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19175/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19175/events
|
https://github.com/huggingface/transformers/pull/19175
| 1,384,177,502
|
PR_kwDOCUB6oc4_hHNs
| 19,175
|
Poc to use safetensors
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you for being verified!",
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks so clean !",
"Opened Hub Pull requests to convert the following weights:\r\n- https://huggingface.co/roberta-base/discussions/3\r\n- https://huggingface.co/roberta-large/discussions/1\r\n- https://huggingface.co/gpt2/discussions/6\r\n- https://huggingface.co/Jean-Baptiste/camembert-ner/discussions/1\r\n- https://huggingface.co/openai/clip-vit-large-patch14/discussions/5\r\n\r\n^will merge the 3 canonical ones to be able to test easily",
"note that you can also test from un-merged Hub PRs, you just have to pass the `refs/pr/:id` as a `revision`, for instance for gpt2:\r\n\r\n```python\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\", revision=\"refs/pr/6\")\r\n```\r\n"
] | 1,663
| 1,664
| 1,664
|
COLLABORATOR
| null |
# What does this PR do?
This PR introduces the necessary code changes to use safetensors weight files as a primary source.
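For context, the safetensors format is deliberately simple: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/byte offsets, then one contiguous byte buffer. The sketch below reads and writes that layout with the standard library only; the helper names and the toy byte payloads are illustrative, not code from this PR.

```python
import json
import struct


def write_safetensors_like(path, tensors):
    """Write a safetensors-style file. tensors: name -> (dtype_str, shape, raw_bytes)."""
    header, payload, offset = {}, b"", 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            # offsets are relative to the start of the byte buffer
            "data_offsets": [offset, offset + len(data)],
        }
        payload += data
        offset += len(data)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # 8-byte LE header size
        f.write(header_bytes)
        f.write(payload)


def read_safetensors_like(path):
    """Read back name -> raw_bytes from a safetensors-style file."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
        blob = f.read()
    return {
        name: blob[meta["data_offsets"][0]:meta["data_offsets"][1]]
        for name, meta in header.items()
    }
```

Because the header is plain JSON and offsets are explicit, a loader can inspect tensor names and shapes (or memory-map individual tensors) without executing any code, which is the safety argument over pickle-based checkpoints.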
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19175/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19175",
"html_url": "https://github.com/huggingface/transformers/pull/19175",
"diff_url": "https://github.com/huggingface/transformers/pull/19175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19175.patch",
"merged_at": 1664549884000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19174
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19174/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19174/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19174/events
|
https://github.com/huggingface/transformers/pull/19174
| 1,384,146,278
|
PR_kwDOCUB6oc4_hAwY
| 19,174
|
Improving TrOCR results with LM 🚀
|
{
"login": "gagan3012",
"id": 49101362,
"node_id": "MDQ6VXNlcjQ5MTAxMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagan3012",
"html_url": "https://github.com/gagan3012",
"followers_url": "https://api.github.com/users/gagan3012/followers",
"following_url": "https://api.github.com/users/gagan3012/following{/other_user}",
"gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions",
"organizations_url": "https://api.github.com/users/gagan3012/orgs",
"repos_url": "https://api.github.com/users/gagan3012/repos",
"events_url": "https://api.github.com/users/gagan3012/events{/privacy}",
"received_events_url": "https://api.github.com/users/gagan3012/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19174). All of your documentation changes will be reflected on that endpoint.",
"Hi,\r\n\r\nThanks for your PR! Wonder if it makes sense to add another language model on top of the decoder of TrOCR (which already is a language model)? For instance, beam search is already supported (as you can use the [generate](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method to autoregressively generate text). What would be the benefit of adding this other decoder on top?\r\n\r\nDid you see a boost in performance?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,670
| 1,670
|
NONE
| null |
# What does this PR do?
This PR adds `TrOCRProcessorWithLM`, which makes it possible to plug a KenLM language model into TrOCR decoding to improve its metric results
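For readers unfamiliar with the technique: a common way to combine an external n-gram LM such as KenLM with a seq2seq decoder is shallow fusion, where each candidate token's decoder log-probability is interpolated with the LM's log-probability at decode time. The sketch below is a generic illustration with made-up scores and an assumed weight `alpha`; it is not the actual `TrOCRProcessorWithLM` implementation.

```python
import math


def shallow_fusion_scores(decoder_logprobs, lm_logprobs, alpha=0.3):
    """Combine per-token scores: score(t) = log P_dec(t) + alpha * log P_lm(t)."""
    floor = math.log(1e-10)  # penalty for tokens the LM has never seen
    return {
        tok: dec + alpha * lm_logprobs.get(tok, floor)
        for tok, dec in decoder_logprobs.items()
    }


def pick_next_token(decoder_logprobs, lm_logprobs, alpha=0.3):
    """Greedy choice over the fused scores."""
    fused = shallow_fusion_scores(decoder_logprobs, lm_logprobs, alpha)
    return max(fused, key=fused.get)
```

With `alpha=0` this degenerates to greedy decoding on the decoder alone; the usual practice is to tune `alpha` (and in beam search, a length/word-insertion bonus) on a validation set.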
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @patrickvonplaten @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19174/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19174",
"html_url": "https://github.com/huggingface/transformers/pull/19174",
"diff_url": "https://github.com/huggingface/transformers/pull/19174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19174.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19173
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19173/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19173/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19173/events
|
https://github.com/huggingface/transformers/pull/19173
| 1,384,008,784
|
PR_kwDOCUB6oc4_glX2
| 19,173
|
Fix doctest for `TFDeiTForImageClassification`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> LGTM! 👍\r\n> \r\n> is the line that sets the non-Keras seed still needed? (`>>> tf.random.set_seed(3)`)\r\n\r\nNo we don't need it :-) Removed it.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
COLLABORATOR
| null |
# What does this PR do?
Since TF 2.10, `tf.random.set_seed` with a fixed seed won't give the same model weights anymore. See the [release note](https://github.com/tensorflow/tensorflow/releases/tag/v2.10.0). We need `tf.keras.utils.set_random_seed()` for this purpose.
This PR fixes the doctest for `TFDeiTForImageClassification` by using the above solution.
- I have to update the expected value however.
- I get the new expected value on a CPU VM. It should work on the GPU VM too, but let's keep an eye on the CI result.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19173/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19173",
"html_url": "https://github.com/huggingface/transformers/pull/19173",
"diff_url": "https://github.com/huggingface/transformers/pull/19173.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19173.patch",
"merged_at": 1664373202000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19172
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19172/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19172/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19172/events
|
https://github.com/huggingface/transformers/pull/19172
| 1,383,995,853
|
PR_kwDOCUB6oc4_gioi
| 19,172
|
Maskformer post-processing fixes and improvements
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @amyeroberts @NielsRogge all comments are addressed, could you approve the PR if everything looks good?"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
- Improves MaskFormer docs, corrects minor typos
- Restructures `MaskFormerFeatureExtractor.post_process_panoptic_segmentation` for better readability and adds a `target_sizes` argument for optional resizing
- Adds `post_process_semantic_segmentation` and `post_process_instance_segmentation` methods.
- Adds a deprecation warning to `post_process_segmentation` method in favour of `post_process_instance_segmentation`
Notes:
This PR is part of a larger effort to ensure consistency of post-processing methods across segmentation models, to define common arguments and outputs, and get ImageSegmentationPipeline working with all available models.
- `post_process_semantic_segmentation` returns segmentations as tensors of shape (height, width), which is consistent with the COCO format
- `post_process_instance_segmentation` returns segmentations either in the same format as the panoptic method or optionally in run-length encoded format (if `return_coco_format` is set to `True`).
- `post_process_semantic_segmentation` currently has an inconsistent input argument (`target_size` instead of `target_sizes`) and output (a 3D tensor instead of a list of 2D tensors)
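As background on the run-length encoded output mentioned above: COCO's uncompressed RLE stores a binary mask as alternating run counts, traversed in column-major order, always starting with the count of background (0) pixels. A minimal sketch, with an illustrative helper name and a plain-list mask rather than a tensor:

```python
def coco_uncompressed_rle(mask, height, width):
    """Encode a flat row-major 0/1 mask as COCO uncompressed RLE.

    Runs are counted in column-major order; the first count is the
    number of leading 0-pixels (possibly 0 if the mask starts with 1).
    """
    counts, prev, run = [], 0, 0
    for x in range(width):
        for y in range(height):
            value = mask[y * width + x]  # column-major traversal
            if value == prev:
                run += 1
            else:
                counts.append(run)
                prev, run = value, 1
    counts.append(run)
    return {"size": [height, width], "counts": counts}
```

Decoding reverses the process: expand the counts back into a column-major pixel stream, toggling between 0 and 1 at each run boundary.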
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19172/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19172",
"html_url": "https://github.com/huggingface/transformers/pull/19172",
"diff_url": "https://github.com/huggingface/transformers/pull/19172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19172.patch",
"merged_at": 1664972835000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19171
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19171/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19171/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19171/events
|
https://github.com/huggingface/transformers/pull/19171
| 1,383,945,792
|
PR_kwDOCUB6oc4_gYEI
| 19,171
|
german training, accelerate and model sharing
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"waiting for the tests and docs rendering by HF docs bot to take a final view",
"You might need an empty commit to re-trigger the doc build job.",
"_The documentation is not available anymore as the PR was closed or merged._",
"seems like all links are working now\r\nready to merge from my side"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
another continuation of https://github.com/huggingface/transformers/issues/18564 @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19171/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19171",
"html_url": "https://github.com/huggingface/transformers/pull/19171",
"diff_url": "https://github.com/huggingface/transformers/pull/19171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19171.patch",
"merged_at": 1663959130000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19170
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19170/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19170/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19170/events
|
https://github.com/huggingface/transformers/pull/19170
| 1,383,916,664
|
PR_kwDOCUB6oc4_gR0T
| 19,170
|
Separate Push CI images from Scheduled CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Added in `Transformers testing design, internal document` on Notion.\r\n\r\n\r\n<img width=\"545\" alt=\"Screenshot 2022-09-23 191436\" src=\"https://user-images.githubusercontent.com/2521628/192018244-918525d6-f9a8-4077-a8a1-e1ccc2b47a3b.png\">\r\n\r\n### Text version\r\n\r\nThe CI for a push event (to main branch) will check if setup.py is changed. If yes, it will launch the docker image build CI before launching the actual tests. This is to make sure the tests will run against the specified package versions in setup.py. In order to avoid the conflict with the daily schedule CI, which should use the same image version for all jobs during a workflow run, we separate the CI images used for push events and schedule CI. The docker images used for push events start with the tag of the images used in the corresponding jobs in scheduled CI, but with a postfix push-ci. For example, transformers-all-latest-gpu in schedule CI will be transformers-all-latest-gpu-push-ci in push CI.\r\n"
] | 1,663
| 1,664
| 1,664
|
COLLABORATOR
| null |
# What does this PR do?
⚠️ **Before merge, I need to build the new push CI images that have the new tags.**
Currently, if `setup.py` is changed, Push CI will re-build the CI images before running tests.
https://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/.github/workflows/self-push-caller.yml#L39-L43
However, this may cause different jobs in a scheduled CI workflow run to use images with different versions.
Recently, when `tokenizers` was updated to `0.13`, some jobs in the scheduled CI failed because they used the new image (with `tokenizers 0.13`) while the `transformers` code in those runs still required `tokenizers < 0.13`.
**This PR separates the push CI images from scheduled CI.**
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19170/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19170",
"html_url": "https://github.com/huggingface/transformers/pull/19170",
"diff_url": "https://github.com/huggingface/transformers/pull/19170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19170.patch",
"merged_at": 1664182543000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19169
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19169/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19169/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19169/events
|
https://github.com/huggingface/transformers/pull/19169
| 1,383,646,402
|
PR_kwDOCUB6oc4_fYrG
| 19,169
|
Add offline runners info in the Slack report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
So we see which runners are offline directly in the report.
Currently, this information is added only if the check is run through `check_runner_status.yml`, where all runners are checked but reported to scheduled CI channel. Adding this information avoids confusion in the case where push/doctest CI runners are offline but reported in the scheduled CI channel.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19169/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19169",
"html_url": "https://github.com/huggingface/transformers/pull/19169",
"diff_url": "https://github.com/huggingface/transformers/pull/19169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19169.patch",
"merged_at": 1663953785000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19168
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19168/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19168/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19168/events
|
https://github.com/huggingface/transformers/pull/19168
| 1,383,546,429
|
PR_kwDOCUB6oc4_fDzN
| 19,168
|
fix HPO DDP GPU problem
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @spigo900 please try with PR, it works for me to do HPO DDP with GPU",
"@yao-matrix",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,666
| 1,663
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/18609
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19168/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19168",
"html_url": "https://github.com/huggingface/transformers/pull/19168",
"diff_url": "https://github.com/huggingface/transformers/pull/19168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19168.patch",
"merged_at": 1663938816000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19166
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19166/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19166/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19166/events
|
https://github.com/huggingface/transformers/pull/19166
| 1,383,411,739
|
PR_kwDOCUB6oc4_enMx
| 19,166
|
Add WhisperModel to transformers
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Going forward, let's check generation with:\r\n```python\r\n#!/usr/bin/env python3\r\nimport whisper\r\nimport jiwer\r\nimport numpy as np\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import WhisperForConditionalGeneration, WhisperProcessor, WhisperTokenizer\r\nfrom whisper.normalizers import EnglishTextNormalizer\r\n\r\nnormalizer = EnglishTextNormalizer()\r\n\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-base.en\")\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-base.en\")\r\n\r\ndevice = \"cuda\"\r\nmodel = model.to(device).eval()\r\n\r\n\r\ndef map_fn(batch):\r\n arrays = [x[\"array\"] for x in batch[\"audio\"]]\r\n\r\n # -> here is a bug\r\n input_features = processor.feature_extractor(arrays, padding=\"max_length\", max_length=480_000, return_tensors=\"pt\").input_features\r\n input_features = input_features.to(device)\r\n\r\n model.config.use_cache = False\r\n sequences = model.generate(input_features, max_length=224, forced_bos_token_id=50362, decoder_start_token_id=50257)\r\n results = processor.tokenizer.batch_decode(sequences, skip_special_tokens=True)\r\n\r\n batch[\"hypotheses\"] = [normalizer(result) for result in results]\r\n batch[\"reference\"] = [normalizer(text) for text in batch[\"text\"]]\r\n return batch\r\n\r\n\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nds = ds.map(map_fn, batch_size=16, remove_columns=ds.column_names, batched=True)\r\n\r\nwer = jiwer.wer(list(ds[\"reference\"]), list(ds[\"hypotheses\"]))\r\nprint(\"Wer\", wer)\r\n```",
"Failing tests are related to the `RAG` model that re-uses the generate function.",
"Ready I think @patrickvonplaten ",
"Hey @patrickvonplaten and @sgugger the PR is ready for a final review! 🤗 ",
"Could we also add a script that runs each checkpoint in a 5-linear as discussed on Slack here?\r\nThese code snippets could then be added to the respective model cards",
"Okay, so here is a simple example : \r\n\r\n```python\r\n>>> model = WhisperForConditionalGeneration.from_pretrained(f\"openai/whisper-large\")\r\n>>> processor = WhisperProcessor.from_pretrained(f\"openai/whisper-large\")\r\n\r\n>>> ds = load_dataset(\"common_voice\", \"ja\", split=\"test\", streaming=True)\r\n>>> ds = ds.cast_column(\"audio\", datasets.Audio(sampling_rate=16_000))\r\n>>> ds_iter = iter(ds)\r\n>>> input_speech = next(ds_iter)[\"audio\"][\"array\"]\r\n>>> inputs = processor(input_speech, return_tensors = \"pt\")\r\n\r\n>>> predicted_ids = model.generate(**inputs)\r\n>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize = True)[0]\r\n'i borrowed a phone from kimura san'\r\n\r\n>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language = \"ja\", task = \"transcribe\")\r\n>>> predicted_ids = model.generate(**inputs, forced_decoder_ids=forced_decoder_ids)\r\n>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]\r\n\"木村さんに電話を貸してもらいました\"\r\n\r\n>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language = \"en\", task = \"transcribe\")\r\n>>> predicted_ids = model.generate(**inputs, forced_decoder_ids=forced_decoder_ids)\r\n>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]\r\n' Kimura san ni denwa wo kaite moraimashita'\r\n```",
"2 final things:\r\n\r\n- Add 2 tests for batched generation\r\n- Make sure the tokenizer has a pad_token_id => it should be identical to the eos_token_id since there is no official one. We don't want to trigger a warning every time we run generation in batch\r\n- Also make sure that `config.pad_token_id` is correctly set.\r\n\r\ncc @sanchit-gandhi we have to remember this when doing fine-tuning experiments! Whisper has `pad_token_id == eos_token_id` which means that during training we need to make sure in our general training scripts that we don't replace the `eos_token_id` with `-100` and thus ignore it in the loss. Instead we should only replace the \"not-first\" pad_token_id with `-100` (we have the same for GPT2 BTW) ",
"Hello!\r\nI apologize for interrupting the development process. But I'm following this thread, because I'm really looking forward to the Whisper at HF, and here I also see words about fine tuning. It will be very cool if you can make good fine tuning and code examples!\r\n\r\nI myself am already trying to finetune in different ways, but so far the model is only being unlearned.\r\n\r\nIn any case, thanks for your work and good luck! ❤️ ",
"> Hello! I apologize for interrupting the development process. But I'm following this thread, because I'm really looking forward to the Whisper at HF, and here I also see words about fine tuning. It will be very cool if you can make good fine tuning and code examples!\r\n> \r\n> I myself am already trying to finetune in different ways, but so far the model is only being unlearned.\r\n> \r\n> In any case, thanks for your work and good luck! heart\r\n\r\nHey @ArtyomZemlyak, \r\n\r\nThis is a major focus of our right now! We've already done some experiments - you can check it here: \r\nhttps://openreview.net/forum?id=9OL2fIfDLK (we've fine-tuned whisper on a bunch of open-source datasets)\r\n\r\nWe hope to have a well-functioning fine-tuning script by early next week (we plan on doing a blog post + google colab)",
"Merging to unblock TF PR",
"Awesome, sorry for the delay! "
] | 1,663
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Adds Whisper to transformers
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19166/reactions",
"total_count": 20,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 5,
"rocket": 9,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19166/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19166",
"html_url": "https://github.com/huggingface/transformers/pull/19166",
"diff_url": "https://github.com/huggingface/transformers/pull/19166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19166.patch",
"merged_at": 1665001712000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19165
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19165/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19165/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19165/events
|
https://github.com/huggingface/transformers/issues/19165
| 1,383,378,920
|
I_kwDOCUB6oc5SdK_o
| 19,165
|
would huggingface like support cpp env libtorch or rewrite the core code for cpp ?
|
{
"login": "mullerhai",
"id": 6143404,
"node_id": "MDQ6VXNlcjYxNDM0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mullerhai",
"html_url": "https://github.com/mullerhai",
"followers_url": "https://api.github.com/users/mullerhai/followers",
"following_url": "https://api.github.com/users/mullerhai/following{/other_user}",
"gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions",
"organizations_url": "https://api.github.com/users/mullerhai/orgs",
"repos_url": "https://api.github.com/users/mullerhai/repos",
"events_url": "https://api.github.com/users/mullerhai/events{/privacy}",
"received_events_url": "https://api.github.com/users/mullerhai/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
### System Info
libtorch 1.21
macos
Hi:
I want to use Hugging Face, but I find the model API is almost entirely Python. I use the C++ environment with libtorch; would you consider supporting C++, or rewriting the core code in C++?
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
no
### Expected behavior
no
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19165/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19164
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19164/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19164/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19164/events
|
https://github.com/huggingface/transformers/issues/19164
| 1,383,324,990
|
I_kwDOCUB6oc5Sc90-
| 19,164
|
Evaluation of wav2vec2 model all labeled string return "<unk>" value
|
{
"login": "lgq-liao",
"id": 12348652,
"node_id": "MDQ6VXNlcjEyMzQ4NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/12348652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lgq-liao",
"html_url": "https://github.com/lgq-liao",
"followers_url": "https://api.github.com/users/lgq-liao/followers",
"following_url": "https://api.github.com/users/lgq-liao/following{/other_user}",
"gists_url": "https://api.github.com/users/lgq-liao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lgq-liao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lgq-liao/subscriptions",
"organizations_url": "https://api.github.com/users/lgq-liao/orgs",
"repos_url": "https://api.github.com/users/lgq-liao/repos",
"events_url": "https://api.github.com/users/lgq-liao/events{/privacy}",
"received_events_url": "https://api.github.com/users/lgq-liao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Are the labels in lower case?",
"Maybe of interest to @sanchit-gandhi as well",
"> Are the labels in lower case?\r\n\r\nYes, it's lower case\r\n\r\nHere is the csv file looks like:\r\n$ head dataset.csv \r\npath,transcription\r\nwav/000010001.WAV,there were barrels of wine in the huge cellar\r\nwav/000010002.WAV,she won a car because she was the twelfth person to call the radio station\r\nwav/000010003.WAV,as they walked back they were shocked to see a pack of stray dogs circling around the car",
"Ok I think that's the issue. Your vocabulary likely only contains upper case letters. The tokenizer doesn't recognise lower case letters so it uses `<unk>` instead.\r\n\r\nTry converting your transcription column to upper case and see if that fixes it.",
"> Ok I think that's the issue. Your vocabulary likely only contains upper case letters. The tokenizer doesn't recognise lower case letters so it uses `<unk>` instead.\r\n> \r\n> Try converting your transcription column to upper case and see if that fixes it.\r\n\r\nYeah, that is the root cause. After I changed it to upper case, the issue go aways:\r\nThank you so much for the troubleshoot. \r\n\r\n***** Running Evaluation *****\r\n Num examples = 91\r\n Batch size = 4\r\n100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.42it/s]pred_str[0]: THERE WERE BARRELS OF WINE IN THE SHU CELLOR\r\nlabel_str[0]: THERE WERE BARRELS OF WINE IN THE HUGE CELLAR\r\n100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.68it/s]\r\n***** eval metrics *****\r\n eval_loss = 118.7373\r\n eval_runtime = 0:00:05.39\r\n eval_samples = 91\r\n eval_samples_per_second = 16.856\r\n eval_steps_per_second = 4.26\r\n eval_wer = 0.1228\r\n\r\n\r\n"
] | 1,663
| 1,664
| 1,664
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Both have same issue
- $ pip freeze |grep datasets
datasets==2.4.0
### Who can help?
@patrickvonplaten
@anton-l
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the issue:
1. Download the [issue_report](https://drive.google.com/drive/folders/1M5xE4L_HBxBQynWyl6f1c-tJtat027d1?usp=sharing) folder to your local
2. open a command prompt and cd to the issue_report
3. run eval cmd: python ctc_finetune.py --eval
4. The WER is 1.0086 because the value of label_str is always "<unk>", as printed at line #566 of [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing)
5. To re-generate the dataset cache files, please run: python customise_dataset.py
Here is the log printed at the end of the evaluation; see the [full_log.log](https://drive.google.com/file/d/1hIMmxfLXOEm_Sx_g3100MvR2so1sthEM/view?usp=sharing) for more details:
***** Running Evaluation *****
Num examples = 91
Batch size = 4
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 5.54it/s]
pred_str[0]: THERE WERE BARRELS OF WINE IN THE SHU CELLOR
label_str[0]: <unk><unk><unk><unk><unk> <unk><unk><unk><unk> <unk><unk><unk><unk><unk><unk><unk> <unk><unk> <unk><unk><unk><unk> <unk><unk> <unk><unk><unk> <unk><unk><unk><unk> <unk><unk><unk><unk><unk><unk>
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.12it/s]
***** eval metrics *****
eval_loss = 4704.6416
eval_runtime = 0:00:06.64
eval_samples = 91
eval_samples_per_second = 13.697
eval_steps_per_second = 3.462
eval_wer = 1.0086
### Expected behavior
I used the original pre-trained model facebook/wav2vec2-large-robust-ft-libri-960h for evaluation; the only change is my customized dataset.
I could not figure out what is wrong with my modified scripts, which have only minor changes from the official example scripts.
So I am not sure whether the issue is in my scripts or in the fine-tuning libraries.
Thanks in advance for helping me with this matter.
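For reference, the usual fix when the tokenizer vocabulary contains only upper-case letters is to upper-case the transcription column before building the dataset. A minimal stdlib-only sketch (the inline CSV and column names mirror the dataset above, not the actual training script):

```python
import csv
import io

# A lower-case transcript decodes to <unk> when the tokenizer's vocab
# only contains upper-case letters; upper-casing the column avoids this.
raw = """path,transcription
wav/000010001.WAV,there were barrels of wine in the huge cellar
wav/000010002.WAV,she won a car because she was the twelfth person to call the radio station
"""

rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    row["transcription"] = row["transcription"].upper()

print(rows[0]["transcription"])  # THERE WERE BARRELS OF WINE IN THE HUGE CELLAR
```

The same one-line `.upper()` mapping can be applied with `datasets.Dataset.map` before feature extraction.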
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19164/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19163
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19163/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19163/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19163/events
|
https://github.com/huggingface/transformers/pull/19163
| 1,383,007,206
|
PR_kwDOCUB6oc4_dTyZ
| 19,163
|
Move AutoClasses under Main Classes
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
MEMBER
| null |
This PR proposes moving `Auto Classes` under the `Main Classes` section instead. As pointed out by @patrickvonplaten, the docs look a little strange because the `Auto Classes` doc isn't bold like the rest of the section and the header name doesn't exactly fit. Since bolded titles mean a section is expandable, we can't put `Auto Classes` in bold to align it with the rest of the section. `Auto Classes` also includes other things besides models (config, tokenizer, processor, etc.), so it doesn't exactly fit under `Models`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19163/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19163",
"html_url": "https://github.com/huggingface/transformers/pull/19163",
"diff_url": "https://github.com/huggingface/transformers/pull/19163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19163.patch",
"merged_at": 1664410170000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19162
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19162/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19162/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19162/events
|
https://github.com/huggingface/transformers/pull/19162
| 1,382,827,643
|
PR_kwDOCUB6oc4_cuCa
| 19,162
|
Fix TrainingArguments documentation
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"In retrospect I really should have foreseen that one, I'm sorry!"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
@Rocketknight1 added a new class variable for `TrainingArguments` but put it before the docstring, which erased all doc for this class :grimacing: This PR fixes that.
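The underlying Python rule is that only a string literal appearing as the *first* statement of a class body becomes its docstring; a class variable placed before it silently demotes the string to a throwaway expression. A minimal illustration (hypothetical class names):

```python
class WithDocstring:
    """I am the docstring."""
    new_field = 1

class WithoutDocstring:
    new_field = 1
    """I am just an unused string expression."""

# Only a string literal that is the first statement becomes __doc__.
print(WithDocstring.__doc__)     # I am the docstring.
print(WithoutDocstring.__doc__)  # None
```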
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19162/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19162",
"html_url": "https://github.com/huggingface/transformers/pull/19162",
"diff_url": "https://github.com/huggingface/transformers/pull/19162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19162.patch",
"merged_at": 1663871912000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19161
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19161/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19161/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19161/events
|
https://github.com/huggingface/transformers/issues/19161
| 1,382,818,942
|
I_kwDOCUB6oc5SbCR-
| 19,161
|
Can't load too big text file for dataset (RAM exhausts), HELP PLEASE !!!!!
|
{
"login": "mv96",
"id": 14794584,
"node_id": "MDQ6VXNlcjE0Nzk0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14794584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mv96",
"html_url": "https://github.com/mv96",
"followers_url": "https://api.github.com/users/mv96/followers",
"following_url": "https://api.github.com/users/mv96/following{/other_user}",
"gists_url": "https://api.github.com/users/mv96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mv96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mv96/subscriptions",
"organizations_url": "https://api.github.com/users/mv96/orgs",
"repos_url": "https://api.github.com/users/mv96/repos",
"events_url": "https://api.github.com/users/mv96/events{/privacy}",
"received_events_url": "https://api.github.com/users/mv96/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"(duplicate of #19199)"
] | 1,663
| 1,664
| 1,664
|
NONE
| null |
Hello All,
I am trying to load a big text file (about 11GB) to pretrain a BERT model from scratch, and loading it exhausts all the RAM on the system.
Is it possible to load the data in batches and then perform training?
I am a bit new to the Hugging Face ecosystem, so I would appreciate any pointers if you have a clue about this.
I am using Google Colab for this purpose.
Please share code snippets, if possible.
Cheers!
```python
# construct dataset
from transformers import LineByLineTextDataset

file_path = "/content/drive/MyDrive/full_text_data.txt"
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=file_path,
    block_size=32,
)
```
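Since `LineByLineTextDataset` reads the whole file into memory at once, one way around the RAM limit is to stream the file in batches of lines and tokenize each batch on the fly. Below is a minimal, library-agnostic sketch (the function name and batching scheme are illustrative, not part of the `transformers` API):

```python
def iter_line_batches(file_path, batch_size=1000):
    """Yield lists of non-empty lines without ever loading the whole file.

    The file handle is consumed lazily, so memory use stays proportional
    to batch_size rather than to the file size.
    """
    with open(file_path, encoding="utf-8") as f:
        batch = []
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            batch.append(line)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:  # flush the final, possibly short, batch
            yield batch
```

Each yielded batch can then be passed to the tokenizer (e.g. `tokenizer(batch, truncation=True, max_length=32)`) inside the training loop, instead of materializing the full dataset up front.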
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19161/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19159
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19159/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19159/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19159/events
|
https://github.com/huggingface/transformers/pull/19159
| 1,382,558,190
|
PR_kwDOCUB6oc4_b1xd
| 19,159
|
Fix ckpt paths in ViT MSN
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
MEMBER
| null |
@sgugger FYI.
PR w.r.t https://github.com/huggingface/transformers/pull/18815
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19159/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19159",
"html_url": "https://github.com/huggingface/transformers/pull/19159",
"diff_url": "https://github.com/huggingface/transformers/pull/19159.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19159.patch",
"merged_at": 1663858981000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19158
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19158/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19158/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19158/events
|
https://github.com/huggingface/transformers/pull/19158
| 1,382,504,415
|
PR_kwDOCUB6oc4_bqmd
| 19,158
|
[WIP] Trainer supporting evaluation on multiple datasets
|
{
"login": "timbmg",
"id": 11020443,
"node_id": "MDQ6VXNlcjExMDIwNDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11020443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timbmg",
"html_url": "https://github.com/timbmg",
"followers_url": "https://api.github.com/users/timbmg/followers",
"following_url": "https://api.github.com/users/timbmg/following{/other_user}",
"gists_url": "https://api.github.com/users/timbmg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timbmg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timbmg/subscriptions",
"organizations_url": "https://api.github.com/users/timbmg/orgs",
"repos_url": "https://api.github.com/users/timbmg/repos",
"events_url": "https://api.github.com/users/timbmg/events{/privacy}",
"received_events_url": "https://api.github.com/users/timbmg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @sgugger, I mostly followed your suggestion in #15857, except instead of having a list of eval_datasets and another training arg, I solved it via passing a dict of eval_datasets. I thought a dict would work better because we also need multiple compute_metric functions. This way it is all lined up and less error-prone. However, let me know if you think otherwise.\r\n\r\nAlso, could you suggest what tests to write for this PR? I am not really sure, since the major change is in `_maybe_log_save_evaluate` and I didn't find a test for that.",
"Thanks for checking it so quickly!\r\n\r\nIn my case, I am training a seq2seq QA model and evaluating it on multiple datasets. However, they have different formats (eg extractive qa like SQuAD, or multiple-choice qa like commonsese QA). Using a seq2seq model for multiple formats has been for example proposed in the [UnifiedQA paper](https://arxiv.org/abs/2005.00700). Having multiple trainers has the limitation that I could only train on a single dataset at a time, but not train on multiple ones at the same time. However, note that if you pass multiple eval_datasets as a dict, but only a single compute_metric callable, the same compute_metrics function will be called on all the eval_datasets. That's what [this if statement](https://github.com/huggingface/transformers/pull/19158/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2048) is doing. So the original scenario described in the Issue is also solved.",
"It's too niche of a use-case to allow for support, especially when we have other tools that easily let you more customizable training/evaluation loops like Accelerate.",
"Alright, I have reverted the change. Let me know in case of anything else:)",
"I'm trying to take advantage of the feature to include multiple eval_datasets in the trainer. Maybe I'm misreading the documentation, I've tried several ways to present the eval_dataset, but keep getting a KeyError when I include a DatasetDict / dict with datasets for the eval_dataset parameter. Am I doing something wrong? Do I need to specify the compute_metrics differently? Couldn't find anything on that. \r\n\r\nHere's an example notebook resulting in the ValueError: https://colab.research.google.com/drive/1yLo9iqY4Cz9_h8BtAvcYRCtK5O_xa5jP?usp=sharing",
"> Thanks for your PR! Having the multiple datasets as a dict solves the problem of distinguishing a single dataset that is a list or a list of datasets. So I like this part.\r\n> \r\n> However I didn't see anything in the issue regarding using several `compute_metrics` function. If there is a need for different metrics, it probably means different Trainer should be built as it represents different tasks/problems. That change should be reverted, as the part where `compute_metrics` can be passed along to the `evaluate`/`predict` function.\r\n\r\n@sgugger passing multiple `compute_metrics` functions for evaluation purposes can actually be more general than stated by @timbmg. For example, suppose we are doing multi-task training and we wish to evaluate on the same or held-out tasks as we train. This is common in recent research publications (eg FLAN-T5). Would you accept to support the multiple compute metrics functions? Or would your advice be to not use the trainer altogether and look towards using `accelerate`? I was worried that `accelerate` for training is a big step back towards writing a lot of boilerplate and code that the `Trainer` saves us.",
"I'd recommend using Accelerate instead of the Trainer for this use case.",
"@sgugger Any examples out there of using Accelerate for this? I would also like to evaluate on multiple datasets while training. Thanks!",
"fyi, @dcruiz01 I guess this can be implemented by overriding the `Trainer` class following the initial commit? Or some mixin regarding multiple trainers or even monkey patching it. Another workaround that comes to my mind is identifying the eval dataset with an additional field and splitting them in the compute_metrics function. I'll try to share if any actually works. Both do feel really hacky though..\r\n\r\nThough I agree this is less common for standard fine-tuning, I want to add use cases where we want the model to perform well on multiple tasks in a zero-shot manner. \r\n\r\n- fine-tuning LLMs, in which we want to measure performance on many many different task / metrics / dataset. \r\n- More meta-benchmarks such as MTEB",
"@sieu-n Any luck with the experiments you mentioned?",
"@sgugger @timbmg it looks like the eval_datasets change was pushed for support as a dict but the compute_metrics changes were not pushed. Will this change be made or no?",
"Hey, I think the additional feature to use separated data collators would be useful"
] | 1,663
| 1,706
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
With this PR `Trainer` and `Seq2SeqTrainer` support evaluating on multiple datasets. For this, the `eval_dataset` and `compute_metrics` parameters have been updated. In order to evaluate on multiple datasets, `eval_dataset` should be a dict mapping a dataset name to a Dataset. In `_maybe_log_save_evaluate` we then loop over the dict, calling `evaluate` with each Dataset. The metric prefix is also updated to contain the dataset name. Furthermore, each eval dataset can optionally have its own `compute_metrics` function. For this, `compute_metrics` should be a dict whose keys match those of `eval_dataset`.
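The loop described above can be sketched in isolation like this (hypothetical names throughout; `evaluate_fn` stands in for `Trainer.evaluate`, and the `eval_<name>_<metric>` prefixing mirrors the PR's description rather than quoting its code):

```python
def evaluate_all(eval_datasets, evaluate_fn, compute_metrics=None):
    """Evaluate each named dataset, prefixing metric keys with its name.

    eval_datasets:   dict mapping name -> dataset
    compute_metrics: optional dict mapping name -> metrics function,
                     or a single function shared by all datasets
    """
    all_metrics = {}
    for name, dataset in eval_datasets.items():
        # Pick the per-dataset metrics function if a dict was passed.
        metrics_fn = (compute_metrics.get(name)
                      if isinstance(compute_metrics, dict) else compute_metrics)
        metrics = evaluate_fn(dataset, compute_metrics=metrics_fn)
        # Prefix each metric with the dataset name, e.g. eval_squad_loss.
        all_metrics.update({f"eval_{name}_{k}": v for k, v in metrics.items()})
    return all_metrics
```

Passing a single callable instead of a dict applies the same metrics function to every dataset, which matches the fallback behavior discussed in the review thread.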
Fixes #15857
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19158/reactions",
"total_count": 6,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19158/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19158",
"html_url": "https://github.com/huggingface/transformers/pull/19158",
"diff_url": "https://github.com/huggingface/transformers/pull/19158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19158.patch",
"merged_at": 1663938893000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19157
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19157/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19157/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19157/events
|
https://github.com/huggingface/transformers/issues/19157
| 1,382,341,474
|
I_kwDOCUB6oc5SZNti
| 19,157
|
Can't use decoder_inputs_embeds argument on MBartForConditionalGeneration
|
{
"login": "hibiki12y",
"id": 28616667,
"node_id": "MDQ6VXNlcjI4NjE2NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/28616667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hibiki12y",
"html_url": "https://github.com/hibiki12y",
"followers_url": "https://api.github.com/users/hibiki12y/followers",
"following_url": "https://api.github.com/users/hibiki12y/following{/other_user}",
"gists_url": "https://api.github.com/users/hibiki12y/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hibiki12y/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hibiki12y/subscriptions",
"organizations_url": "https://api.github.com/users/hibiki12y/orgs",
"repos_url": "https://api.github.com/users/hibiki12y/repos",
"events_url": "https://api.github.com/users/hibiki12y/events{/privacy}",
"received_events_url": "https://api.github.com/users/hibiki12y/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Maybe of interest to @ArthurZucker :)",
"Having a look right now! Thanks for finding this 🤗 "
] | 1,663
| 1,668
| 1,668
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.23
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
maybe not necessary
### Expected behavior
https://github.com/huggingface/transformers/pull/13800
The same problem appears in `MBartForConditionalGeneration.forward`.
It is currently written as:
```python
if decoder_input_ids is None:
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
```
So it looks like it needs to be edited to:
```python
if decoder_input_ids is None and decoder_inputs_embeds is None:
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
```
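The effect of the proposed guard can be illustrated with a plain-Python sketch. Note this is a simplified, list-based stand-in: MBart's real `shift_tokens_right` operates on tensors and rotates the last non-pad (language-id) token to the front, which the toy version below approximates by rotating the last element.

```python
def shift_tokens_right(labels, pad_token_id):
    """Rotate the last token to the front (simplified MBart-style shift),
    replacing -100 (the ignored-label sentinel) with the pad token."""
    shifted = [labels[-1]] + labels[:-1]
    return [pad_token_id if t == -100 else t for t in shifted]

def prepare_decoder_input_ids(labels, pad_token_id,
                              decoder_input_ids=None, decoder_inputs_embeds=None):
    # Only derive decoder_input_ids from labels when NEITHER
    # decoder_input_ids NOR decoder_inputs_embeds was passed -- this is
    # the extra `decoder_inputs_embeds is None` check the issue asks for.
    if decoder_input_ids is None and decoder_inputs_embeds is None:
        decoder_input_ids = shift_tokens_right(labels, pad_token_id)
    return decoder_input_ids
```

Without the second condition, passing `decoder_inputs_embeds` together with `labels` would still overwrite `decoder_input_ids`, and the model would then reject receiving both.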
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19157/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19156
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19156/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19156/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19156/events
|
https://github.com/huggingface/transformers/pull/19156
| 1,382,310,506
|
PR_kwDOCUB6oc4_bCO1
| 19,156
|
Reduce LR for TF MLM example test
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
MEMBER
| null |
The TF MLM example test was a little flaky, depending on the exact shuffling order of the dataset. I reduced the LR to 1e-4 and it now lands consistently around a final perplexity of 36 (compared to the test threshold of 42).
Cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19156/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19156",
"html_url": "https://github.com/huggingface/transformers/pull/19156",
"diff_url": "https://github.com/huggingface/transformers/pull/19156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19156.patch",
"merged_at": 1663851087000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19155
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19155/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19155/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19155/events
|
https://github.com/huggingface/transformers/issues/19155
| 1,382,162,991
|
I_kwDOCUB6oc5SYiIv
| 19,155
|
load_in_8bit=True crashes GPT-J when running model.generate()
|
{
"login": "petertjmills",
"id": 56031935,
"node_id": "MDQ6VXNlcjU2MDMxOTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/56031935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petertjmills",
"html_url": "https://github.com/petertjmills",
"followers_url": "https://api.github.com/users/petertjmills/followers",
"following_url": "https://api.github.com/users/petertjmills/following{/other_user}",
"gists_url": "https://api.github.com/users/petertjmills/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petertjmills/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petertjmills/subscriptions",
"organizations_url": "https://api.github.com/users/petertjmills/orgs",
"repos_url": "https://api.github.com/users/petertjmills/repos",
"events_url": "https://api.github.com/users/petertjmills/events{/privacy}",
"received_events_url": "https://api.github.com/users/petertjmills/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @petertjmills -- can you share with us the error you're seeing or, better yet, a link to a colab with the error?",
"Thank you for the reply! I threw together a quick notebook to send and it worked 🙄\r\n\r\nI've found the source of the issue. In the bitsandbytes readme they specify:\r\n```\r\nHardware requirements:\r\n\r\n LLM.int8(): NVIDIA Turing (RTX 20xx; T4) or Ampere GPU (RTX 30xx; A4-A100); (a GPU from 2018 or older).\r\n 8-bit optimizers and quantization: NVIDIA Maxwell GPU or newer (>=GTX 9XX).\r\n```\r\n\r\nThe first time I got a P100 (Pascal arch, 2016)\r\nThe second time I got a T4, go figure. \r\nMay be worth documenting?",
"> May be worth documenting?\r\n\r\nDefinitely! Would you like to open a PR with a more informative exception? :D"
] | 1,663
| 1,663
| 1,663
|
NONE
| null |
### System Info
Colab Pro, NVIDIA P100
Transformers: 4.22.1
Accelerate: 0.12.0
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This works fine, loads the model using 6GB GPU memory, 10GB free
```
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
revision="float16",
torch_dtype=torch.float16,
load_in_8bit=True,
device_map='auto',
low_cpu_mem_usage=True
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```
However when I run:
```
prompt = "I "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=5,)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
The colab environment crashes with an unknown error, during model.generate()
`max_length=5` and a short prompt were tested to lower mem requirements, but this happens with any length and prompt.
Presumably it's not a memory issue, as I can just about run inference without load_in_8bit on the same arch.
### Expected behavior
Model generates exciting tokens, decodes, and prints!
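The resolution in the comments was a hardware mismatch: LLM.int8() needs a Turing GPU (compute capability 7.5, e.g. T4) or newer, while the P100 is Pascal (6.0). A pre-flight check could look like the sketch below; the 7.5 threshold is taken from the bitsandbytes README quoted above, and the commented-out line shows how one would feed it `torch.cuda.get_device_capability()` in a real script.

```python
def supports_int8_inference(capability):
    """Return True if a (major, minor) CUDA compute capability supports LLM.int8().

    Turing (T4, RTX 20xx) is (7, 5); Ampere (A100, RTX 30xx) is (8, x).
    Pascal cards such as the P100 are (6, 0) and crash instead of erroring.
    """
    major, minor = capability
    return (major, minor) >= (7, 5)

# In a real script (requires torch and a visible GPU):
# import torch
# if not supports_int8_inference(torch.cuda.get_device_capability()):
#     raise RuntimeError("load_in_8bit=True needs a Turing-or-newer GPU")
```

Running such a check before `from_pretrained(..., load_in_8bit=True)` would turn the silent Colab crash into an informative error, which is what the follow-up PR suggestion in the comments is about.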
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19155/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19154
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19154/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19154/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19154/events
|
https://github.com/huggingface/transformers/pull/19154
| 1,382,013,635
|
PR_kwDOCUB6oc4_aEBY
| 19,154
|
[Conditional DETR] Add doc tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you! It is great!"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR improves the docs of Conditional DETR by adding doc tests and a figure summarizing the paper.
cc @DeppMeng
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19154/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19154",
"html_url": "https://github.com/huggingface/transformers/pull/19154",
"diff_url": "https://github.com/huggingface/transformers/pull/19154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19154.patch",
"merged_at": 1663845665000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19153
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19153/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19153/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19153/events
|
https://github.com/huggingface/transformers/pull/19153
| 1,381,944,018
|
PR_kwDOCUB6oc4_Z1v-
| 19,153
|
Fixed typo: "dictionnary" to "dictionary".
|
{
"login": "Fei-Wang",
"id": 11441526,
"node_id": "MDQ6VXNlcjExNDQxNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11441526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fei-Wang",
"html_url": "https://github.com/Fei-Wang",
"followers_url": "https://api.github.com/users/Fei-Wang/followers",
"following_url": "https://api.github.com/users/Fei-Wang/following{/other_user}",
"gists_url": "https://api.github.com/users/Fei-Wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fei-Wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fei-Wang/subscriptions",
"organizations_url": "https://api.github.com/users/Fei-Wang/orgs",
"repos_url": "https://api.github.com/users/Fei-Wang/repos",
"events_url": "https://api.github.com/users/Fei-Wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fei-Wang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19153/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19153",
"html_url": "https://github.com/huggingface/transformers/pull/19153",
"diff_url": "https://github.com/huggingface/transformers/pull/19153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19153.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19152
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19152/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19152/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19152/events
|
https://github.com/huggingface/transformers/pull/19152
| 1,381,818,886
|
PR_kwDOCUB6oc4_Zbq5
| 19,152
|
[TensorFlow] Adding LeViT
|
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19152). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still working on it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still working.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Working!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# Adding TensorFlow version of LeViT
This PR adds the TensorFlow version of [LeViT](https://arxiv.org/abs/2104.01136).
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue linked: #19123
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19152/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19152",
"html_url": "https://github.com/huggingface/transformers/pull/19152",
"diff_url": "https://github.com/huggingface/transformers/pull/19152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19152.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19151
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19151/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19151/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19151/events
|
https://github.com/huggingface/transformers/pull/19151
| 1,381,757,270
|
PR_kwDOCUB6oc4_ZOp4
| 19,151
|
update perf_train_cpu_many doc
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19151/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19151",
"html_url": "https://github.com/huggingface/transformers/pull/19151",
"diff_url": "https://github.com/huggingface/transformers/pull/19151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19151.patch",
"merged_at": 1663852815000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19150
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19150/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19150/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19150/events
|
https://github.com/huggingface/transformers/pull/19150
| 1,381,728,665
|
PR_kwDOCUB6oc4_ZI7-
| 19,150
|
Fixed type hint for pipelines/check_task
|
{
"login": "Fei-Wang",
"id": 11441526,
"node_id": "MDQ6VXNlcjExNDQxNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11441526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fei-Wang",
"html_url": "https://github.com/Fei-Wang",
"followers_url": "https://api.github.com/users/Fei-Wang/followers",
"following_url": "https://api.github.com/users/Fei-Wang/following{/other_user}",
"gists_url": "https://api.github.com/users/Fei-Wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fei-Wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fei-Wang/subscriptions",
"organizations_url": "https://api.github.com/users/Fei-Wang/orgs",
"repos_url": "https://api.github.com/users/Fei-Wang/repos",
"events_url": "https://api.github.com/users/Fei-Wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fei-Wang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19150/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19150",
"html_url": "https://github.com/huggingface/transformers/pull/19150",
"diff_url": "https://github.com/huggingface/transformers/pull/19150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19150.patch",
"merged_at": 1663958120000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19149
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19149/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19149/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19149/events
|
https://github.com/huggingface/transformers/pull/19149
| 1,381,466,534
|
PR_kwDOCUB6oc4_YQ0g
| 19,149
|
Fix `m2m_100.mdx` doc example missing `labels`
|
{
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"node_id": "MDQ6VXNlcjY2Nzk5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/66799406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mustapha-AJEGHRIR",
"html_url": "https://github.com/Mustapha-AJEGHRIR",
"followers_url": "https://api.github.com/users/Mustapha-AJEGHRIR/followers",
"following_url": "https://api.github.com/users/Mustapha-AJEGHRIR/following{/other_user}",
"gists_url": "https://api.github.com/users/Mustapha-AJEGHRIR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mustapha-AJEGHRIR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mustapha-AJEGHRIR/subscriptions",
"organizations_url": "https://api.github.com/users/Mustapha-AJEGHRIR/orgs",
"repos_url": "https://api.github.com/users/Mustapha-AJEGHRIR/repos",
"events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `labels` variable is not defined; `model_inputs` already contains this information:
```python
model_inputs.keys() # dict_keys(['input_ids', 'attention_mask', 'labels'])
```
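To illustrate why no separate `labels` variable is needed, here is a minimal plain-Python sketch (no `transformers` required; `fake_model_forward` is a hypothetical stand-in for the real model call): once the tokenizer output carries a `"labels"` key, unpacking it with `**` forwards the labels automatically.

```python
# Illustrative sketch: when the tokenizer output already carries a
# "labels" key, unpacking it with ** forwards the labels to the model,
# so defining a separate `labels` variable is redundant.

def fake_model_forward(input_ids, attention_mask, labels=None):
    """Hypothetical stand-in for model(**model_inputs)."""
    loss = None if labels is None else sum(labels)  # dummy "loss"
    return {"loss": loss, "logits": input_ids}

# Shape of what the tokenizer returns when `text_target` is passed:
model_inputs = {
    "input_ids": [0, 42, 7, 2],
    "attention_mask": [1, 1, 1, 1],
    "labels": [0, 13, 2],
}

outputs = fake_model_forward(**model_inputs)
print(outputs["loss"])  # 15 -- labels were picked up from model_inputs
```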
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19149/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19149",
"html_url": "https://github.com/huggingface/transformers/pull/19149",
"diff_url": "https://github.com/huggingface/transformers/pull/19149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19149.patch",
"merged_at": 1664450879000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19148
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19148/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19148/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19148/events
|
https://github.com/huggingface/transformers/issues/19148
| 1,381,445,021
|
I_kwDOCUB6oc5SVy2d
| 19,148
|
Luke Finetuning Error
|
{
"login": "luffycodes",
"id": 22951144,
"node_id": "MDQ6VXNlcjIyOTUxMTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/22951144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luffycodes",
"html_url": "https://github.com/luffycodes",
"followers_url": "https://api.github.com/users/luffycodes/followers",
"following_url": "https://api.github.com/users/luffycodes/following{/other_user}",
"gists_url": "https://api.github.com/users/luffycodes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luffycodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luffycodes/subscriptions",
"organizations_url": "https://api.github.com/users/luffycodes/orgs",
"repos_url": "https://api.github.com/users/luffycodes/repos",
"events_url": "https://api.github.com/users/luffycodes/events{/privacy}",
"received_events_url": "https://api.github.com/users/luffycodes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi!\r\nI also experienced this error. @luffycodes were you eventually able to make it work?"
] | 1,663
| 1,699
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.21.2
- Platform: Linux-4.15.0-192-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @jplu
Hi, I was running the finetuning script for LUKE: [Link](https://github.com/huggingface/transformers/blob/main/examples/research_projects/luke/run_luke_ner_no_trainer.py).
I ran into this weird issue:
`forward() got an unexpected keyword argument 'ner_tags'`
Also, any idea what the F1 score is for the NER task using this script?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`CUDA_VISIBLE_DEVICES=0 python run_luke_ner_no_trainer.py --model_name_or_path studio-ousia/luke-base --dataset_name conll2003 --task_name ner --max_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir NER`
### Expected behavior
Starts the training process.
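For context, a common cause of this error is that the dataset batch still carries raw columns (such as `ner_tags`) that the model's `forward()` does not accept. Below is a hypothetical plain-Python sketch (the toy `forward` and batch contents are made up, not the real LUKE signature) of a standard workaround: drop every key that is not in the forward signature before calling the model.

```python
# Hypothetical illustration of the failure and a common workaround:
# keys not accepted by forward() raise a TypeError, so filter the batch
# down to the parameters the function actually declares.
import inspect

def forward(input_ids, attention_mask=None, labels=None):  # toy forward()
    return {"logits": input_ids, "labels": labels}

batch = {
    "input_ids": [1, 2, 3],
    "attention_mask": [1, 1, 1],
    "labels": [0, 3, 0],
    "ner_tags": [0, 3, 0],  # raw dataset column that triggers the error
}

accepted = set(inspect.signature(forward).parameters)
clean_batch = {k: v for k, v in batch.items() if k in accepted}
outputs = forward(**clean_batch)  # no "unexpected keyword argument"
```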
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19148/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19147
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19147/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19147/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19147/events
|
https://github.com/huggingface/transformers/issues/19147
| 1,381,433,864
|
I_kwDOCUB6oc5SVwII
| 19,147
|
Is BertWordPieceTokenizer behaviour deterministic on the same data?
|
{
"login": "ZurabDz",
"id": 34181252,
"node_id": "MDQ6VXNlcjM0MTgxMjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34181252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZurabDz",
"html_url": "https://github.com/ZurabDz",
"followers_url": "https://api.github.com/users/ZurabDz/followers",
"following_url": "https://api.github.com/users/ZurabDz/following{/other_user}",
"gists_url": "https://api.github.com/users/ZurabDz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZurabDz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZurabDz/subscriptions",
"organizations_url": "https://api.github.com/users/ZurabDz/orgs",
"repos_url": "https://api.github.com/users/ZurabDz/repos",
"events_url": "https://api.github.com/users/ZurabDz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZurabDz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,667
| 1,667
|
NONE
| null |
So, I accidentally lost my BertWordPieceTokenizer checkpoint. What will happen if I retrain it on the same data? Will the result be the same?
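The intuition, sketched below with a toy vocabulary builder (this is not the real WordPiece trainer; the frequency-counting scheme here is made up for illustration), is that vocabulary construction from frequency counts is deterministic when ties are broken consistently, so retraining on the identical corpus with identical settings is expected to reproduce the same vocabulary.

```python
# Toy sketch: a deterministic vocab build from character and bigram
# frequencies. Two runs over the same corpus produce identical output
# because sorting breaks ties consistently.
from collections import Counter

def build_vocab(corpus, vocab_size):
    counts = Counter()
    for word in corpus:
        counts.update(word)                                       # characters
        counts.update(word[i:i + 2] for i in range(len(word) - 1))  # bigrams
    # Sort by (-frequency, token) so ties break deterministically.
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [tok for tok, _ in ranked[:vocab_size]]

corpus = ["hugging", "face", "hug", "hugs"]
v1 = build_vocab(corpus, 10)
v2 = build_vocab(corpus, 10)  # "retraining" on the same data
print(v1 == v2)  # True
```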
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19147/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19334
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19334/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19334/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19334/events
|
https://github.com/huggingface/transformers/issues/19334
| 1,397,430,578
|
I_kwDOCUB6oc5TSxky
| 19,334
|
`truncation='do_not_truncate'` is not working equivalently to `truncation=False`
|
{
"login": "urialon",
"id": 15002544,
"node_id": "MDQ6VXNlcjE1MDAyNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15002544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urialon",
"html_url": "https://github.com/urialon",
"followers_url": "https://api.github.com/users/urialon/followers",
"following_url": "https://api.github.com/users/urialon/following{/other_user}",
"gists_url": "https://api.github.com/users/urialon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urialon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urialon/subscriptions",
"organizations_url": "https://api.github.com/users/urialon/orgs",
"repos_url": "https://api.github.com/users/urialon/repos",
"events_url": "https://api.github.com/users/urialon/events{/privacy}",
"received_events_url": "https://api.github.com/users/urialon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\nThis issue belongs in `transformers` afaik. All the logic should be handled there, as `truncation=False` does not mean anything for this library (IIRC).\r\n\r\nIf you can reproduce the bug using only `tokenizers` and not `transformers`, then the bug is probably in this library.",
"Thanks, closing and re-creating in the `transformers` format"
] | 1,663
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
Hi,
`truncation='do_not_truncate'` is not working equivalently to `truncation=False`.
When using `truncation=False` and providing `max_length`, the tokenizer defaults to the `'longest_first'` truncation strategy.
Whether this default behavior is natural or not, isn't `False` supposed to be identical to `'do_not_truncate'`?
This leads to a situation where the user explicitly specifies `truncation=False` but the text **is truncated**.
This manual: https://huggingface.co/docs/transformers/pad_truncation and this doc https://huggingface.co/docs/transformers/main_classes/tokenizer
say that:
>`False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.
Which means that they are supposed to be equivalent (regardless of what they do, they should behave the same).
I suggest that `False` should just mean "no truncation", regardless of whether `max_length` was supplied.
Here is a short example:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
sent = 'The quick brown fox jumps over the lazy dog'
len(tokenizer.encode(sent, max_length=5, truncation='do_not_truncate'))
```
prints: `11`
```python
len(tokenizer.encode(sent, max_length=5, truncation=False))
```
prints: `5`
Thanks,
Uri
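For clarity, here is a sketch of the argument resolution this issue argues for (this is not the actual `transformers` implementation, just an illustration of the proposed semantics): both `False` and `'do_not_truncate'` map to "no truncation", even when `max_length` is supplied.

```python
# Sketch of the proposed resolution: False and 'do_not_truncate' are
# treated identically, and max_length is ignored for both.

def resolve_truncation(truncation, max_length=None):
    if truncation is False or truncation == "do_not_truncate":
        return "do_not_truncate"          # ignore max_length entirely
    if truncation is True or truncation == "longest_first":
        return "longest_first"
    return truncation                     # 'only_first', 'only_second', ...

# Both spellings of "don't truncate" resolve identically:
print(resolve_truncation(False, max_length=5))              # do_not_truncate
print(resolve_truncation("do_not_truncate", max_length=5))  # do_not_truncate
```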
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19334/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19146
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19146/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19146/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19146/events
|
https://github.com/huggingface/transformers/pull/19146
| 1,381,353,833
|
PR_kwDOCUB6oc4_X4jq
| 19,146
|
Add some tests for check_dummies
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19146). All of your documentation changes will be reflected on that endpoint."
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a new subfolder in the `tests` folder for the tests of the quality scripts used in the CI (so not inside the Transformers lib). As seen recently with the `check_dummies` script, when people apply too many modifications to our repo and update those scripts, there might be some breaking changes that slip through the cracks. Hopefully such new tests will help catch the failures early.
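As a rough sketch of the kind of test such a subfolder could hold (hypothetical example; `create_dummy_object` here is a stand-in, and the real `check_dummies` API may differ), a unit test simply calls the script's helper and asserts on the generated source:

```python
# Hedged sketch: a plain-assert unit test for a hypothetical helper from
# a repo quality script that generates dummy-object source code.

def create_dummy_object(name, backend):  # stand-in for the script's helper
    return (
        f"class {name}(metaclass=DummyObject):\n"
        f'    _backends = ["{backend}"]\n'
    )

def test_create_dummy_object():
    dummy = create_dummy_object("BertModel", "torch")
    assert "class BertModel(metaclass=DummyObject):" in dummy
    assert '_backends = ["torch"]' in dummy

test_create_dummy_object()
```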
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19146/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19146",
"html_url": "https://github.com/huggingface/transformers/pull/19146",
"diff_url": "https://github.com/huggingface/transformers/pull/19146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19146.patch",
"merged_at": 1663786450000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19145
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19145/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19145/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19145/events
|
https://github.com/huggingface/transformers/pull/19145
| 1,381,215,835
|
PR_kwDOCUB6oc4_XbCs
| 19,145
|
Fixed typo in generation_utils.py
|
{
"login": "nbalepur",
"id": 55101514,
"node_id": "MDQ6VXNlcjU1MTAxNTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/55101514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbalepur",
"html_url": "https://github.com/nbalepur",
"followers_url": "https://api.github.com/users/nbalepur/followers",
"following_url": "https://api.github.com/users/nbalepur/following{/other_user}",
"gists_url": "https://api.github.com/users/nbalepur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbalepur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbalepur/subscriptions",
"organizations_url": "https://api.github.com/users/nbalepur/orgs",
"repos_url": "https://api.github.com/users/nbalepur/repos",
"events_url": "https://api.github.com/users/nbalepur/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbalepur/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Changed "unfeasable" to "unfeasible"
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19145/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19145",
"html_url": "https://github.com/huggingface/transformers/pull/19145",
"diff_url": "https://github.com/huggingface/transformers/pull/19145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19145.patch",
"merged_at": 1663786792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19144/events
|
https://github.com/huggingface/transformers/pull/19144
| 1,381,133,300
|
PR_kwDOCUB6oc4_XJd5
| 19,144
|
Fix dummy creation for multi-framework objects
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19144). All of your documentation changes will be reflected on that endpoint."
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
Follow-up from #17304, this finishes fixing the dummy creation script when there is more than one framework on which the object depends.
The problem is that it does not recognize the lines in the init like:
```
if not (is_sentencepiece_available() and is_tokenizers_available()):
```
To check the change, after the PR the following passes:
```py
import sys
# Adapt to where the repo is
sys.path.append("../git/transformers/utils")
from check_dummies import find_backend
assert find_backend(" if not is_tokenizers_available():") == "tokenizers"
assert find_backend(" if not (is_sentencepiece_available() and is_tokenizers_available()):") == "sentencepiece_and_tokenizers"
```
Before this PR, only the first test passed.
I will work on adding unit tests such as the one above for all of our quality scripts.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19144/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19144",
"html_url": "https://github.com/huggingface/transformers/pull/19144",
"diff_url": "https://github.com/huggingface/transformers/pull/19144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19144.patch",
"merged_at": 1663774905000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19143/events
|
https://github.com/huggingface/transformers/pull/19143
| 1,380,994,856
|
PR_kwDOCUB6oc4_Wscs
| 19,143
|
fix a bug in beam_search (moving log_softmax after logits_processor)
|
{
"login": "nonstopfor",
"id": 47969037,
"node_id": "MDQ6VXNlcjQ3OTY5MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/47969037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonstopfor",
"html_url": "https://github.com/nonstopfor",
"followers_url": "https://api.github.com/users/nonstopfor/followers",
"following_url": "https://api.github.com/users/nonstopfor/following{/other_user}",
"gists_url": "https://api.github.com/users/nonstopfor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonstopfor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonstopfor/subscriptions",
"organizations_url": "https://api.github.com/users/nonstopfor/orgs",
"repos_url": "https://api.github.com/users/nonstopfor/repos",
"events_url": "https://api.github.com/users/nonstopfor/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonstopfor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Maybe of interest to @gante ?",
"Hey @nonstopfor 👋 \r\n\r\nYes, your point is correct -- the normalization should happen after the logits processors. We have a flag to do it -- if you call `generate` with `renormalize_logits=True`, the last logit processor renormalizes your logits :)\r\n\r\nThis means this PR should not be needed -- let me know if it doesn't address your problem.",
"Ok. Thanks."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix a bug in the beam_search function. If we use `log_softmax` before `logits_processor`, then the processed logits may not represent log-probabilities correctly, because some logits may be set to `-inf` while the others remain unchanged (i.e., `sum(exp(logits)) != 1`). This can significantly influence the generation results in my case.
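The effect is easy to reproduce numerically. The following is a toy sketch in plain Python (not the actual `beam_search` code) showing that masking scores to `-inf` *after* `log_softmax` leaves the surviving entries summing to less than one, while masking first and normalizing afterwards yields a proper distribution:

```python
import math

def log_softmax(xs):
    # Numerically stable log-softmax over a list of floats.
    m = max(x for x in xs if x != float("-inf"))
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

logits = [2.0, 1.0, 0.5, 0.1]

# Buggy order: normalize first, then a logits processor masks two tokens.
scores = log_softmax(logits)
scores[2:] = [float("-inf"), float("-inf")]
leaked = sum(math.exp(s) for s in scores)  # ~0.79, probability mass is lost

# Fixed order: mask first, then normalize over the surviving tokens.
masked = logits[:2] + [float("-inf"), float("-inf")]
renormalized = log_softmax(masked)
total = sum(math.exp(s) for s in renormalized)  # sums back to 1.0
```

As noted in the review comments, calling `generate(..., renormalize_logits=True)` achieves the same renormalization without changing `beam_search` itself.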
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19143/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19143",
"html_url": "https://github.com/huggingface/transformers/pull/19143",
"diff_url": "https://github.com/huggingface/transformers/pull/19143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19143.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19142/events
|
https://github.com/huggingface/transformers/issues/19142
| 1,380,989,321
|
I_kwDOCUB6oc5SUDmJ
| 19,142
|
use `@unittest.skipIf` decorators inside tokenizer's tests instead of `if ...: return`
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Let me ping you @ydshieh as it seems to me that you have a great knowledge of testing and your opinion on it could be really appreciated! (and cc @LysandreJik and @sgugger for visibility)",
"Would `test_sentencepiece` be an attribute of the common tester? Not sure where it comes from in your code sample ;-)",
"Oh yes you're right, I should have mentioned that! \r\n\r\nIndeed `test_sentencepiece` is an attribute of the test class (if you want to see how this decorator works on a very simple test class, you can see an example [here](https://www.tutorialspoint.com/unittest_framework/unittest_framework_skip_test.htm)). \r\n\r\nIn `TokenizerTesterMixin` it is set to `False` by default but is sometimes overridden to `True` by the test class of a particular tokenizer.",
"I love the usage of `@unittest.skipIf`!\r\n\r\nHowever, I see a problem with `@unittest.skipIf(not test_sentencepiece, \"Not testing sentencepiece\")`, as `test_sentencepiece` wouldn't be defined at the time this decoration (and the condition) is evaluated.\r\n\r\nWe will need to think of a solution :-)",
"FYI, the same exists in `transformers`, for example\r\n\r\n```\r\n def test_tie_model_weights(self):\r\n if not self.test_torchscript:\r\n return\r\n```",
"> I love the usage of @unittest.skipIf!\r\n\r\nYeah :raised_hands:!\r\n \r\n> However, I see a problem with @unittest.skipIf(not test_sentencepiece, \"Not testing sentencepiece\"), as test_sentencepiece wouldn't be defined at the time this decoration (and the condition) is evaluated.\r\n\r\nFor the case of `test_sentencepiece` I think they will be defined before as they are defined here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/114295c010dd9c94d48add7a0f091ba6ebdf482b/tests/test_tokenization_common.py#L142\r\n\r\nhttps://github.com/huggingface/transformers/blob/19420fd99e1f08a052a1d0d267f3496002d03618/tests/models/xlm_prophetnet/test_tokenization_xlm_prophetnet.py#L33",
"You are right, @SaulLu! I didn't know that the decoration will be evaluated with the class attributes :-) You know better than me 😆 So It makes the change much easier!\r\n\r\n\r\nThis works!\r\n```python\r\nimport unittest\r\n\r\n\r\nclass DummyTest(unittest.TestCase):\r\n test_dummy = False\r\n\r\n @unittest.skipIf(not test_dummy, \"not test dummy\")\r\n def test_me(self):\r\n assert 1 == 2\r\n```",
"Well, I am quite cautious, and found we have something more to deal with. The following will test both method. It looks like it only uses the value in `DummyTestMixin`, not the one overridden in the subclasses.\r\n\r\n(it will work if we override `test_me` methods too with the skipIf) \r\n\r\n```\r\nimport unittest\r\n\r\n\r\nclass DummyTestMixin:\r\n test_dummy = True\r\n\r\n @unittest.skipIf(not test_dummy, \"not test dummy\")\r\n def test_me(self):\r\n assert 1 == 2\r\n\r\n\r\nclass DummyTest(DummyTestMixin, unittest.TestCase):\r\n test_dummy = True\r\n\r\n\r\nclass DummyNotTest(DummyTestMixin, unittest.TestCase):\r\n test_dummy = False\r\n\r\n```",
"Oh no! It would have been so great to have! But cool that you saw it quickly @ydshieh ",
"We can do some research though. I can find some time to see if there is any approach.",
"From a brief look, it looks like the solution adopted in the model tester (testing the class variable at the beginning of the test and exiting early) is the easiest one.",
"I have one last suggestion: it seems to me that it is possible to skip a test from inside the test using `self.skipTest` ([doc](https://docs.python.org/3/library/unittest.html?highlight=skiptest#unittest.TestCase.skipTest) - it was introduced in python 3.1).\r\n\r\nOn the toy example it seems to work well:\r\n```python\r\nimport unittest\r\n\r\n\r\nclass DummyTestMixin:\r\n test_dummy = True\r\n a = 2\r\n\r\n def test_me(self):\r\n if not self.test_dummy:\r\n self.skipTest(\"not test dummy\")\r\n assert self.a == 2\r\n\r\n\r\nclass DummyTestPass(DummyTestMixin, unittest.TestCase):\r\n test_dummy = True\r\n\r\n\r\nclass DummyNotTestPass(DummyTestMixin, unittest.TestCase):\r\n test_dummy = False\r\n\r\n\r\nclass DummyTestNotPass(DummyTestMixin, unittest.TestCase):\r\n test_dummy = True\r\n a = 1\r\n\r\n\r\nclass DummyNotTestNotPass(DummyTestMixin, unittest.TestCase):\r\n test_dummy = False\r\n a = 1\r\n\r\nif __name__ == \"__main__\":\r\n unittest.main()\r\n```\r\n\r\nWhat do you think about this?",
"This is awesome, @SaulLu ! Thank you :-). I would love this new approach to skip. Leave @sgugger and @LysandreJik for a final confirmation.",
"That works for me!",
"So cool! I'll try to take care of these changes today or tomorrow",
"Oops, it completely slipped my mind. I'll try to do this next week."
] | 1,663
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
### Feature request
Currently in tokenizer testing, many tests coded in `test_tokenization_common.py` are not relevant for all tokenizers. In most cases, when the test is not relevant, it is still run but no verification is done in it. See the snippet below for example:
https://github.com/huggingface/transformers/blob/114295c010dd9c94d48add7a0f091ba6ebdf482b/tests/test_tokenization_common.py#L384-L396
I would like to propose replacing these `if` checks in the test methods with `@unittest.skipIf` decorators. Applied to the previous example, this would give:
```python
@unittest.skipIf(not test_sentencepiece, "Not testing sentencepiece")
def test_subword_regularization_tokenizer(self) -> None:
# Subword regularization is only available for the slow tokenizer.
sp_model_kwargs = {"enable_sampling": True, "alpha": 0.1, "nbest_size": -1}
tokenizer = self.get_tokenizer(sp_model_kwargs=sp_model_kwargs)
self.assertTrue(hasattr(tokenizer, "sp_model_kwargs"))
self.assertIsNotNone(tokenizer.sp_model_kwargs)
self.assertTrue(isinstance(tokenizer.sp_model_kwargs, dict))
self.assertEqual(tokenizer.sp_model_kwargs, sp_model_kwargs)
self.check_subword_sampling(tokenizer)
```
### Motivation
The problem with the current method is that we don't have a view of the number of tests actually performed on each type of tokenizer. If errors are made in the configuration of the test classes, we can get a green check for all the tests while in reality nothing has been checked.
### Your contribution
If you ever find it relevant, I can make the changes or let someone else who would be available to do it before me.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19142/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19141/events
|
https://github.com/huggingface/transformers/pull/19141
| 1,380,939,098
|
PR_kwDOCUB6oc4_Wg3X
| 19,141
|
Support dependencies from github
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Test failures are not related to this PR (but to `datasets`, sorry)\r\n\r\n<s>May I merge anyway?</s>\r\n\r\nEDIT: actually the CI has been fixed on main, let me just update this PR",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
MEMBER
| null |
Updated the `deps` regex to support dependencies from github.
For example I often want to run the `transformers` CI but with the `main` branch of `datasets` using
```
"datasets @ git+https://github.com/huggingface/datasets@main#egg=datasets",
```
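For illustration, a hypothetical pattern in the spirit of this change (the real regex in `setup.py` may differ) accepts both a plain version specifier and a PEP 508 direct reference:

```python
import re

# Capture the package name, then optionally either a direct reference
# ("pkg @ git+https://...") or a version specifier ("pkg>=1.0").
deps_re = re.compile(r"^\s*([\w.-]+?)\s*(?:@\s*\S+|[<>=!~].*)?$")

for spec in (
    "datasets>=1.18.0",
    "datasets @ git+https://github.com/huggingface/datasets@main#egg=datasets",
):
    name = deps_re.match(spec).group(1)  # "datasets" in both cases
```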
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19141/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19141",
"html_url": "https://github.com/huggingface/transformers/pull/19141",
"diff_url": "https://github.com/huggingface/transformers/pull/19141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19141.patch",
"merged_at": 1663769732000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19140/events
|
https://github.com/huggingface/transformers/pull/19140
| 1,380,841,244
|
PR_kwDOCUB6oc4_WMZp
| 19,140
|
[fix] Add DeformableDetrFeatureExtractor
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
As pointed out by @Deppmeng, who is adding Conditional DETR in #18948, the postprocessing of Deformable DETR is actually different from that of regular DETR. Namely, a sigmoid activation function is used rather than softmax, and the no-object class is included, whereas DETR discards this class.
Hence, we'll need a new `DeformableDetrFeatureExtractor` which includes this custom postprocessing logic. As only the postprocessing of object detection is different, I'm using `Copied from` statements wherever possible.
To do:
- [x] update `preprocessor_config.json` of all repos on the hub
- [x] use `from_pretrained` in the code snippets for the feature extractor
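As a toy numeric sketch of that difference (a simplification for illustration, not the actual `DeformableDetrFeatureExtractor` code):

```python
import math

# Toy class logits for one predicted box: two real classes plus a
# trailing "no-object" class.
logits = [1.2, -0.3, 0.4]

# Regular DETR: softmax across all classes, then discard the no-object class.
exps = [math.exp(l) for l in logits]
detr_scores = [e / sum(exps) for e in exps][:-1]

# Deformable DETR: an independent sigmoid per class, no class discarded.
deformable_scores = [1.0 / (1.0 + math.exp(-l)) for l in logits]
```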
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19140/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19140/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19140",
"html_url": "https://github.com/huggingface/transformers/pull/19140",
"diff_url": "https://github.com/huggingface/transformers/pull/19140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19140.patch",
"merged_at": 1663832724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19139/events
|
https://github.com/huggingface/transformers/pull/19139
| 1,380,794,360
|
PR_kwDOCUB6oc4_WCt6
| 19,139
|
Allowing users to use the latest `tokenizers` release!
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to run `make style` when changing the setup.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Failures seem unrelated. Yet rebasing on main to remove the failure linked to the Datasets release and re-launching spurious tests would be helpful to ease everyone's mind :-)",
"@sgugger Should I wait for a second core maintainer's opinion on this ?",
"Nope, you can go ahead and merge :-) ",
"I couldn't immediately find the release process for this repository - when will this make it into a release? `tokenizers` (for versions earlier than 0.13.0) had no wheel available for Apple silicon, so I believe until this PR is released we're stuck with source builds for that dependency.",
"The next release of Transformers will be in a month roughly. In the meantime, you can install it from source."
] | 1,663
| 1,664
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
- Allow users to use the most recent `tokenizers` version.
- Should be 100% backward compatible, but there were quite large changes to the actual codebase.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19139/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19139",
"html_url": "https://github.com/huggingface/transformers/pull/19139",
"diff_url": "https://github.com/huggingface/transformers/pull/19139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19139.patch",
"merged_at": 1663775165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19138/events
|
https://github.com/huggingface/transformers/issues/19138
| 1,380,735,224
|
I_kwDOCUB6oc5STFj4
| 19,138
|
`is_torch_tpu_available() got an unexpected keyword argument 'check_device'`
|
{
"login": "Sakurakdx",
"id": 48399040,
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sakurakdx",
"html_url": "https://github.com/Sakurakdx",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"You are trying to run the examples of the main branch along with an older version of Transformers. Either upgrade your Transformers or use the [examples matching v4.18.0](https://github.com/huggingface/transformers/tree/v4.18.0/examples).",
"Thanks for your response."
] | 1,663
| 1,663
| 1,663
|
NONE
| null |
### System Info
transformers==4.18.0
pytorch==1.12.0
pytorch-lightning==1.6.0
### Who can help?
@patrickvonplaten @sgugger @SaulLu
When I run `examples/pytorch/question-answering/run_qa.py`, it reports a bug that `is_torch_tpu_available() got an unexpected keyword argument 'check_device'`. I don't know why this problem happens. Could you help me solve this problem?

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_qa.py \
--model_name_or_path xlm-roberta-base \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--version_2_with_negative \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
### Expected behavior
Could you help me solve this problem?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19138/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19137/events
|
https://github.com/huggingface/transformers/issues/19137
| 1,380,651,831
|
I_kwDOCUB6oc5SSxM3
| 19,137
|
how to load multiple text files in LineByLineTextDataset ?
|
{
"login": "mv96",
"id": 14794584,
"node_id": "MDQ6VXNlcjE0Nzk0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14794584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mv96",
"html_url": "https://github.com/mv96",
"followers_url": "https://api.github.com/users/mv96/followers",
"following_url": "https://api.github.com/users/mv96/following{/other_user}",
"gists_url": "https://api.github.com/users/mv96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mv96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mv96/subscriptions",
"organizations_url": "https://api.github.com/users/mv96/orgs",
"repos_url": "https://api.github.com/users/mv96/repos",
"events_url": "https://api.github.com/users/mv96/events{/privacy}",
"received_events_url": "https://api.github.com/users/mv96/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi everyone, \r\n\r\nI am a bit new to hugging face environment , I was trying to pretrain a model from scratch following taking some inspirations from this post\r\n\r\n[tutorial link](https://ireneli.eu/2021/03/28/deep-learning-19-training-mlm-on-any-pre-trained-bert-models/)\r\n\r\n Question : can I pass all the text files to construct the dataset ?\r\n\r\n`dataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path='MyData.tsv',\r\n block_size=128\r\n)`\r\n\r\nand also If someone can explain me what the block size means ? \r\n\r\ndoes it mean it will load 128 lines at a time to construct a batch of dataset ?",
"[LineByLineTextDataset](https://github.com/huggingface/transformers/blob/main/src/transformers/data/datasets/language_modeling.py#L115) seems like do not provide such functionality, I think you can either combing those tsvs yourself or you could Extend a class similar to this:\r\n\r\n```python3\r\nfrom typing import Union, List\r\nimport os\r\nfrom datasets import Dataset\r\n\r\nclass LineByLineTextDataset(Dataset):\r\n \"\"\"\r\n This will be superseded by a framework-agnostic approach soon.\r\n \"\"\"\r\n\r\n def __init__(self, tokenizer: PreTrainedTokenizer, file_paths: Union[str, List[str]], block_size: int):\r\n warnings.warn(\r\n DEPRECATION_WARNING.format(\r\n \"https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py\"\r\n ),\r\n FutureWarning,\r\n )\r\n if isinstance(file_paths, list):\r\n for file in file_paths:\r\n if os.path.isfile(file) is False:\r\n raise ValueError(f\"Input file path {file} not found\")\r\n else:\r\n if os.path.isfile(file_paths) is False:\r\n raise ValueError(f\"Input file path {file_paths} not found\")\r\n # Here, we do not cache the features, operating under the assumption\r\n # that we will soon use fast multithreaded tokenizers from the\r\n # `tokenizers` repo everywhere =)\r\n logger.info(f\"Creating features from dataset file at {file_paths}\")\r\n\r\n all_lines = []\r\n for file in file_paths:\r\n with open(file, encoding=\"utf-8\") as f:\r\n lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n\r\n all_lines.extend(lines)\r\n\r\n batch_encoding = tokenizer(all_lines, add_special_tokens=True, truncation=True, max_length=block_size)\r\n self.examples = batch_encoding[\"input_ids\"]\r\n self.examples = [{\"input_ids\": torch.tensor(e, dtype=torch.long)} for e in self.examples]\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def __getitem__(self, i) -> Dict[str, torch.tensor]:\r\n return 
self.examples[i]\r\n```\r\n\r\n`batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)` here it seems like block_size controls how much token should be present after encoding at max, so if line is 'too long' it will just truncate it",
"Hi @ZurabDz,\r\n\r\nThanks, for the help.\r\n\r\nSo I will combine to get the single tsv file.\r\n",
"I think you should close an issue if it's resolved."
] | 1,663
| 1,663
| 1,663
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19137/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19136/events
|
https://github.com/huggingface/transformers/issues/19136
| 1,380,595,558
|
I_kwDOCUB6oc5SSjdm
| 19,136
|
`if not something` is ambiguous
|
{
"login": "lsz05",
"id": 13938347,
"node_id": "MDQ6VXNlcjEzOTM4MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/13938347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsz05",
"html_url": "https://github.com/lsz05",
"followers_url": "https://api.github.com/users/lsz05/followers",
"following_url": "https://api.github.com/users/lsz05/following{/other_user}",
"gists_url": "https://api.github.com/users/lsz05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsz05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsz05/subscriptions",
"organizations_url": "https://api.github.com/users/lsz05/orgs",
"repos_url": "https://api.github.com/users/lsz05/repos",
"events_url": "https://api.github.com/users/lsz05/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsz05/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thank you for the issue! It's good to know the problems encountered. :hugs: \r\n\r\nOn the other hand, I'm not sure the case we want to catch is if `required_input` is equal to None (rather an empty list and maybe something else..). I don't have a typical case in mind but that would be the first thing we would need to find to solve your problem.\r\n\r\nWould you happen to know what input was given to the pad method when you got this error?",
"I examined what happened here with the following method:\r\n\r\nI add two lines \r\n```python\r\nprint(f\"{required_input=}\")\r\nprint(f\"{self.model_input_names[0]=}\")\r\n```\r\nbetween line 2095 and 2097 to see the values of these two variables, and get the following output\r\nhttps://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2905-L2910\r\n\r\nOUTPUT:\r\n```python\r\nrequired_input=DeviceArray([[[ 1, 3091, 459, ..., 3, 3, 3],\r\n [ 1, 1175, 60, ..., 3, 3, 3],\r\n [ 1, 1191, 90, ..., 3, 3, 3],\r\n ...,\r\n [ 1, 1433, 60, ..., 3, 3, 3],\r\n [ 1, 511, 292, ..., 3, 3, 3],\r\n [ 1, 442, 318, ..., 3, 3, 3]]], dtype=int32)\r\nself.model_input_names[0]='input_ids'\r\n```\r\n\r\nTherefore, I believe `required_input` is typically a list or a tensor. If we use `if not required_input`, sometimes it's possible to be processed as `bool` type.\r\n\r\nAs you said, if we want to catch if `required_input` is an empty list, why don't we consider judging from its shape?\r\n\r\nIn the following verifications I would like to show that this line (2907) might not be exactly working as how we want.\r\n\r\nlist\r\n```python\r\nIn [1]: empty = [[], []] # tokenizer([\"\"] * 2, add_special_tokens=False)\r\n\r\nIn [2]: not_empty = [[1, 2, 3], [4, 5, 6]]\r\n\r\nIn [3]: not empty, not not_empty\r\nOut[3]: (False, False)\r\n```\r\n\r\nnumpy\r\n```python\r\nIn [4]: import numpy as np\r\n\r\nIn [5]: not np.array(empty)\r\n<ipython-input-5-b0dbaf8aec3d>:1: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. 
Use `array.size > 0` to check that an array is not empty.\r\n not np.array(empty)\r\nOut[5]: True\r\n\r\nIn [6]: not np.array(not_empty)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nInput In [6], in <cell line: 1>()\r\n----> 1 not np.array(not_empty)\r\n\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```\r\n\r\npt\r\n```python\r\nIn [8]: import torch\r\n\r\nIn [12]: not torch.Tensor(empty)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nInput In [12], in <cell line: 1>()\r\n----> 1 not torch.Tensor(empty)\r\n\r\nRuntimeError: Boolean value of Tensor with no values is ambiguous\r\n\r\nIn [13]: not torch.Tensor(not_empty)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nInput In [13], in <cell line: 1>()\r\n----> 1 not torch.Tensor(not_empty)\r\n\r\nRuntimeError: Boolean value of Tensor with more than one value is ambiguous\r\n```\r\n\r\njax\r\n```python\r\nIn [14]: import jax.numpy as jnp\r\n\r\nIn [15]: not jnp.array(empty)\r\nOut[15]: True\r\n\r\nIn [16]: not jnp.array(not_empty)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nInput In [16], in <cell line: 1>()\r\n----> 1 not jnp.array(not_empty)\r\n\r\nFile /usr/local/lib/python3.8/functools.py:399, in partialmethod._make_unbound_method.<locals>._method(cls_or_self, *args, **keywords)\r\n 397 def _method(cls_or_self, /, *args, **keywords):\r\n 398 keywords = {**self.keywords, **keywords}\r\n--> 399 return self.func(cls_or_self, *self.args, *args, **keywords)\r\n\r\nFile ~/env/xxx/lib/python3.8/site-packages/jax/_src/device_array.py:43, in _forward_method(attrname, self, fun, *args)\r\n 42 def _forward_method(attrname, self, 
fun, *args):\r\n---> 43 return fun(getattr(self, attrname), *args)\r\n\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```\r\n\r\nAs a conclusion, if the goal is to determine whether `required_input` is a list/array/tensor containing nothing, there is a risk causing error. In this case, I suggest to judge from the shape if it is an array/tensor (if it's a list, it's fine, so it might be necessary to get its type first).\r\n\r\nIf the goal is just to determine whether it's `None` (as `tokenizer(something)` seems not to return `None` in most cases, I think this line doesn't mean to do this), would `if something is not None` be better?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry this slipped through the cracks. Very much in agreeance here @lsz05 and this is one of the reason we usually avoid relying on Python magic bool conversion but test for explicit values. Would you mind making a PR with a fix?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I almost forgot it.\r\nI'll do something asap.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0
- Platform: Linux-4.19.157-1.20201118.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.3
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.10.2+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
GPU device: A100 x 1
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I run the following code
https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_mlm_flax.py#L296
and an error occurs here:
https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2907
My `required_input` is a tensor (in jax), sometimes I get error `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`.
### Expected behavior
https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2907
I suggest to use `if required_input is None` instead of `if not required_input` to avoid error, as the latter one is ambiguous.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19136/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19135/events
|
https://github.com/huggingface/transformers/issues/19135
| 1,380,571,388
|
I_kwDOCUB6oc5SSdj8
| 19,135
|
Use metrics that consider the input as well as the (predicted, reference) tuple in the Trainer
|
{
"login": "slvcsl",
"id": 25265140,
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slvcsl",
"html_url": "https://github.com/slvcsl",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I just saw that the feature is actually already added. \r\n\r\nOne can add the trainer argument --include_inputs_for_metrics\r\nand the compute_metrics will receive the inputs as the third element of the tuple."
] | 1,663
| 1,663
| 1,663
|
NONE
| null |
### Feature request
Allow the **compute_metrics()** in the **Trainer** to take into account the **original input** in addition to the predictions and labels.
### Motivation
It is currently possible to pass a custom compute_metrics() to the Trainer for evaluation. An example is
```
def compute_metrics(eval_preds):
metric = evaluate.load("glue", "mrpc")
logits, labels = eval_preds
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
```
However, the compute_metrics seems to be constrained only to receive a logit, label tuple.
This is insufficient for some metrics that also depend on the original sentence. An example is [SARI](https://huggingface.co/spaces/evaluate-metric/sari), which is currently implemented in the evaluate library.
Being unable to use the original input in the evaluation makes it impossible to use the Trainer for some seq2seq tasks, e.g. simplification.
### Your contribution
If the request is accepted, I will try to contribute with a PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19135/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19134/events
|
https://github.com/huggingface/transformers/pull/19134
| 1,380,553,444
|
PR_kwDOCUB6oc4_VQdE
| 19,134
|
Added the option to specify "use_one_hot=True" in the forward pass/mo…
|
{
"login": "TanjaBaeumel",
"id": 96110322,
"node_id": "U_kgDOBbqG8g",
"avatar_url": "https://avatars.githubusercontent.com/u/96110322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TanjaBaeumel",
"html_url": "https://github.com/TanjaBaeumel",
"followers_url": "https://api.github.com/users/TanjaBaeumel/followers",
"following_url": "https://api.github.com/users/TanjaBaeumel/following{/other_user}",
"gists_url": "https://api.github.com/users/TanjaBaeumel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TanjaBaeumel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TanjaBaeumel/subscriptions",
"organizations_url": "https://api.github.com/users/TanjaBaeumel/orgs",
"repos_url": "https://api.github.com/users/TanjaBaeumel/repos",
"events_url": "https://api.github.com/users/TanjaBaeumel/events{/privacy}",
"received_events_url": "https://api.github.com/users/TanjaBaeumel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19134). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,667
| 1,667
|
NONE
| null |
…del call. Allows to have optimizable inputs in the vector space of WordPiece.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19134/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19134",
"html_url": "https://github.com/huggingface/transformers/pull/19134",
"diff_url": "https://github.com/huggingface/transformers/pull/19134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19134.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19133/events
|
https://github.com/huggingface/transformers/pull/19133
| 1,380,492,828
|
PR_kwDOCUB6oc4_VEQE
| 19,133
|
Fix FlaxPretTrainedModel pt weights check
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
[files tab](https://github.com/huggingface/transformers/pull/19133/files) should make it clear on what's being changed.
```py
if os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)
# will evaluate always to True as long as `pretrained_model_name_or_path` is a non-empty str
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19133/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19133",
"html_url": "https://github.com/huggingface/transformers/pull/19133",
"diff_url": "https://github.com/huggingface/transformers/pull/19133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19133.patch",
"merged_at": 1663762624000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19132/events
|
https://github.com/huggingface/transformers/pull/19132
| 1,380,255,061
|
PR_kwDOCUB6oc4_UVEv
| 19,132
|
oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with …
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix @sgugger please notice the issue and help review",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19132). All of your documentation changes will be reflected on that endpoint.",
"@sgugger I agree with your point, so I upload another PR (https://github.com/huggingface/transformers/pull/19151) to only update the doc.",
"we will release a new oneCCL 1.12.1 to work with torch 1.12.1"
] | 1,663
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
…PyTorch 1.12.1
raise error in this condition and update the doc
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Fixes # (issue)
torch 1.12.1 does not work with intel ccl 1.12.0
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19132/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19132",
"html_url": "https://github.com/huggingface/transformers/pull/19132",
"diff_url": "https://github.com/huggingface/transformers/pull/19132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19132.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19131/events
|
https://github.com/huggingface/transformers/pull/19131
| 1,380,252,493
|
PR_kwDOCUB6oc4_UUko
| 19,131
|
[BugFix] Fix fsdp option on shard_grad_op.
|
{
"login": "ZHUI",
"id": 16911935,
"node_id": "MDQ6VXNlcjE2OTExOTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/16911935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZHUI",
"html_url": "https://github.com/ZHUI",
"followers_url": "https://api.github.com/users/ZHUI/followers",
"following_url": "https://api.github.com/users/ZHUI/following{/other_user}",
"gists_url": "https://api.github.com/users/ZHUI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZHUI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZHUI/subscriptions",
"organizations_url": "https://api.github.com/users/ZHUI/orgs",
"repos_url": "https://api.github.com/users/ZHUI/repos",
"events_url": "https://api.github.com/users/ZHUI/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZHUI/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Is there someone can review and merge this pr, thanks a lot! cc: @sgugger @ydshieh"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix fsdp option on shard_grad_op.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19131/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19131",
"html_url": "https://github.com/huggingface/transformers/pull/19131",
"diff_url": "https://github.com/huggingface/transformers/pull/19131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19131.patch",
"merged_at": 1663761383000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19130/events
|
https://github.com/huggingface/transformers/pull/19130
| 1,380,249,010
|
PR_kwDOCUB6oc4_UT6X
| 19,130
|
Remove duplicate parameters in run_clip.py
|
{
"login": "enze5088",
"id": 14285786,
"node_id": "MDQ6VXNlcjE0Mjg1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14285786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enze5088",
"html_url": "https://github.com/enze5088",
"followers_url": "https://api.github.com/users/enze5088/followers",
"following_url": "https://api.github.com/users/enze5088/following{/other_user}",
"gists_url": "https://api.github.com/users/enze5088/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enze5088/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enze5088/subscriptions",
"organizations_url": "https://api.github.com/users/enze5088/orgs",
"repos_url": "https://api.github.com/users/enze5088/repos",
"events_url": "https://api.github.com/users/enze5088/events{/privacy}",
"received_events_url": "https://api.github.com/users/enze5088/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
The overwrite_cache parameter in this file is declared twice. Remove one of the two.
https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19130/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19130",
"html_url": "https://github.com/huggingface/transformers/pull/19130",
"diff_url": "https://github.com/huggingface/transformers/pull/19130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19130.patch",
"merged_at": 1663958921000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19129/events
|
https://github.com/huggingface/transformers/pull/19129
| 1,380,076,458
|
PR_kwDOCUB6oc4_TxpF
| 19,129
|
Add doctests to Perceiver examples
|
{
"login": "stevenmanton",
"id": 3666725,
"node_id": "MDQ6VXNlcjM2NjY3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3666725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenmanton",
"html_url": "https://github.com/stevenmanton",
"followers_url": "https://api.github.com/users/stevenmanton/followers",
"following_url": "https://api.github.com/users/stevenmanton/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenmanton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenmanton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenmanton/subscriptions",
"organizations_url": "https://api.github.com/users/stevenmanton/orgs",
"repos_url": "https://api.github.com/users/stevenmanton/repos",
"events_url": "https://api.github.com/users/stevenmanton/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenmanton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stevenmanton \r\n\r\nThank you a lot for the PR! Currently the test will fail for the following \r\n```bash\r\nFAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationConvProcessing.forward\r\nFAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationFourier.forward\r\nFAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationLearned.forward\r\n```\r\nWe should add the expected outputs in the doc examples.\r\nFor other doc examples in this model, the tests pass, but they don't have outputs to test.\r\nFor example, `PerceiverForOpticalFlow` has\r\n```\r\n>>> logits = outputs.logits\r\n```\r\nwithout output and the expected value. We can have something like\r\n\r\n```python\r\n>>> list(logits .shape)\r\nexpected shapes\r\n```\r\nWould you like to fix and enhance those doc examples?\r\n\r\nOnce you have a change (staged or commited), you can run the test like\r\n```\r\npython utils/prepare_for_doc_test.py src/transformers/models/perceiver/modeling_perceiver.py\r\npytest --doctest-modules src/transformers/models/perceiver/modeling_perceiver.py -sv --doctest-continue-on-failure\r\n```\r\nOnce the test is run, you can clean up the git status before further changes or push.\r\n\r\nThanks!",
"@ydshieh thanks for the quick feedback! Yes, I noticed those tests were failing, but I thought it might be something about my local environment and the extra newline stuff. I just pushed a fix, which passes locally.\r\n\r\nBy the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?",
"@stevenmanton Thanks for the push. However, instead of changing to\r\n\r\n```python\r\n>>> predicted_class = model.config.id2label[predicted_class_idx]\r\n```\r\nwe should change it to \r\n```python\r\n>>> print(\"Predicted class:\", model.config.id2label[predicted_class_idx])\r\nadd some output here - so the doctest will test again it\r\n```\r\nYou can find similar work is done \r\nhttps://github.com/huggingface/transformers/blob/d5848a574a3990c95f20512673ecef9f57e0fe81/src/transformers/models/deit/modeling_deit.py#L735-L736\r\n\r\nOtherwise, the example has nothing to be tested.\r\n\r\n> By the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?\r\n\r\nYes. The PR CI on CircleCI does not run doctest :-). It is run after the PR being merged.",
"@ydshieh Thanks for your feedback and patience. I believe I've corrected it. The extra newline stuff is confusing, but if you run `prepare_for_doc_test.py` (which I think just adds a newline to all the docstrings) on the last commit, all tests pass for me locally. ",
"Thanks! As we are going to run the doctest for this model, would you like to add some expected outputs at the following places? \r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1921\r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695\r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1131\r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1033\r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1021\r\n\r\nAnd a few places in this doc example (for `logits` and `loss`)\r\n\r\nhttps://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695\r\n\r\nFor `logits`, it would look like to add (the values should be collected from the run)\r\n```\r\n >>> list(logits.shape)\r\n [1, 196, 8192]\r\n```",
"When running `prepare_for_doc_test.py`, it will add some empty lines - to make doctest pass. That is why we should stage our change, run that script, run doctest, and discard the change before commit or further changes :-)",
"@ydshieh Ok, I added some more checks for the sizes of logits. They all pass for me locally."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related to #16292
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Taken from #16292 : @patrickvonplaten @ydshieh @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19129/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19129",
"html_url": "https://github.com/huggingface/transformers/pull/19129",
"diff_url": "https://github.com/huggingface/transformers/pull/19129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19129.patch",
"merged_at": 1663953575000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19128/events
|
https://github.com/huggingface/transformers/pull/19128
| 1,380,053,307
|
PR_kwDOCUB6oc4_Ts2p
| 19,128
|
Document and validate typical_p in generation
|
{
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Throws a `ValueError` when the `typical_p` argument is provided to text-generation but its value or `do_sample=False` prevents typical decoding from happening as intended. Adds a line documenting typical decoding.
Most arguments to generate were previously covered in #18261 , but not `typical_p`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19128/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19128",
"html_url": "https://github.com/huggingface/transformers/pull/19128",
"diff_url": "https://github.com/huggingface/transformers/pull/19128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19128.patch",
"merged_at": 1664376306000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19127/events
|
https://github.com/huggingface/transformers/issues/19127
| 1,379,994,302
|
I_kwDOCUB6oc5SQQq-
| 19,127
|
document-question-answering pipeline does not work with some models
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"The `model_type` in the config.json of this specific model seems to be wrong. The types currently supported that would work with LayoutLM are:\r\n- `layoutlm`\r\n- `layoutlmv2`\r\n- `layoutlmv3`\r\n- `layoutxlm`\r\n\r\nThe specified type is `layoutlm-tc`.",
"Cc @ankrgyl ",
"From the `transformers` side, I think the error could be a bit more descriptive/informative than having a `KeyError`.",
"I had a bit of discussion with @NielsRogge about this. The model type here is different because this model actually has a slightly different architecture than standard LayoutLM (it has an additional token classifier head). @NielsRogge was kind enough to submit a PR (https://huggingface.co/impira/layoutlm-invoices/discussions/1) which changes it to `layoutlm`.\r\n\r\nWith this change (now merged), your code above should run just fine. However, you will likely get suboptimal results, because the model has learned to depend on the token classifier to produce accurate results. I'd recommend running it through DocQuery (https://github.com/impira/docquery) which has a patched version of the model ([here](https://github.com/impira/docquery/blob/main/src/docquery/ext/model.py#L152)) that makes use of it.\r\n\r\nYou can do that via something like:\r\n\r\n```\r\n!apt install tesseract-ocr\r\n!apt install libtesseract-dev\r\n!pip install Pillow\r\n!pip install pytesseract\r\n!pip install docquery\r\n\r\n# You can use a http link, a local path or a PIL.Image object\r\nimg_path = \"https://huggingface.co/spaces/impira/docquery/resolve/main/invoice.png\"\r\n\r\n# This is a patched version of the pipeline that knows how to use the token classifier\r\nfrom docquery import pipeline\r\n\r\n# This works\r\npipe = pipeline(\"document-question-answering\", model=\"impira/layoutlm-document-qa\")\r\n\r\n# This should work\r\npipe = pipeline(\"document-question-answering\", model=\"impira/layoutlm-invoices\")\r\n```\r\n\r\nIn the meantime, I'll explore a few alternatives, e.g. packaging up the model directly in the repo or patching it a different way, so that it uses the token classifier.",
"@NielsRogge and @osanseviero just following up on this, we made the necessary changes in https://github.com/impira/docquery to keep the model working both in transformers directly and DocQuery, so at least from our side, we could close this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@osanseviero I believe this issue should be closable now (your original repro should now succeed). But please let me know if you see otherwise.",
"Sounds good! Thanks a lot for this!"
] | 1,663
| 1,666
| 1,666
|
MEMBER
| null |
### System Info
Colab, latest release
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
!apt install tesseract-ocr
!apt install libtesseract-dev
!pip install Pillow
!pip install pytesseract
# You can use a http link, a local path or a PIL.Image object
img_path = "https://huggingface.co/spaces/impira/docquery/resolve/main/invoice.png"
from transformers import pipeline
# This works
pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
# This breaks with strange error
pipe = pipeline("document-question-answering", model="impira/layoutlm-invoices")
# Error: KeyError: 'layoutlm-tc'
```
### Expected behavior
This would work with both models
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19127/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19126/events
|
https://github.com/huggingface/transformers/pull/19126
| 1,379,720,156
|
PR_kwDOCUB6oc4_SntA
| 19,126
|
Fix None loss in docstring for Wav2Vec2ForPreTraining
|
{
"login": "abdouaziz",
"id": 39220574,
"node_id": "MDQ6VXNlcjM5MjIwNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39220574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdouaziz",
"html_url": "https://github.com/abdouaziz",
"followers_url": "https://api.github.com/users/abdouaziz/followers",
"following_url": "https://api.github.com/users/abdouaziz/following{/other_user}",
"gists_url": "https://api.github.com/users/abdouaziz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdouaziz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdouaziz/subscriptions",
"organizations_url": "https://api.github.com/users/abdouaziz/orgs",
"repos_url": "https://api.github.com/users/abdouaziz/repos",
"events_url": "https://api.github.com/users/abdouaziz/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdouaziz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,667
| 1,667
|
NONE
| null |
- [ ] This PR fixes the None loss in the docstring for Wav2Vec2ForPreTraining
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19126/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19126",
"html_url": "https://github.com/huggingface/transformers/pull/19126",
"diff_url": "https://github.com/huggingface/transformers/pull/19126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19126.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19125/events
|
https://github.com/huggingface/transformers/pull/19125
| 1,379,681,116
|
PR_kwDOCUB6oc4_Sfsm
| 19,125
|
[Wav2Vec2] Fix None loss in docstring for Wav2Vec2ForPreTraining
|
{
"login": "abdouaziz",
"id": 39220574,
"node_id": "MDQ6VXNlcjM5MjIwNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39220574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdouaziz",
"html_url": "https://github.com/abdouaziz",
"followers_url": "https://api.github.com/users/abdouaziz/followers",
"following_url": "https://api.github.com/users/abdouaziz/following{/other_user}",
"gists_url": "https://api.github.com/users/abdouaziz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdouaziz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdouaziz/subscriptions",
"organizations_url": "https://api.github.com/users/abdouaziz/orgs",
"repos_url": "https://api.github.com/users/abdouaziz/repos",
"events_url": "https://api.github.com/users/abdouaziz/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdouaziz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,663
| 1,663
| 1,663
|
NONE
| null |
- [ ] This PR fixes the `None` loss in the docstring for Wav2Vec2ForPreTraining
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19125/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19125",
"html_url": "https://github.com/huggingface/transformers/pull/19125",
"diff_url": "https://github.com/huggingface/transformers/pull/19125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19125.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19124/events
|
https://github.com/huggingface/transformers/pull/19124
| 1,379,664,206
|
PR_kwDOCUB6oc4_ScLE
| 19,124
|
Sharding fails in TF when absolute scope was modified if `.` in layer name
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Do you think we should add a special test for that @sgugger ? ",
"On it! 🤗",
"Just added a test, RAG works, I think I'll try to test most of our models just to be sure that there is no other strange pattern. ",
"Tested with `\"openai/clip-vit-large-patch14\", \"xlm-roberta-base\"` in addition, works nicely. "
] | 1,663
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #18776, by taking care of the particular case of absolute scope modifications
## Who can review?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19124/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19124",
"html_url": "https://github.com/huggingface/transformers/pull/19124",
"diff_url": "https://github.com/huggingface/transformers/pull/19124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19124.patch",
"merged_at": 1665765273000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19123/events
|
https://github.com/huggingface/transformers/issues/19123
| 1,379,528,584
|
I_kwDOCUB6oc5SOe-I
| 19,123
|
Adding TensorFlow port of LeViT
|
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am still working on this."
] | 1,663
| 1,666
| null |
CONTRIBUTOR
| null |
### Feature request
To add the TensorFlow port of the [LeViT](https://arxiv.org/abs/2104.01136) architecture. The architecture is currently present in the Transformers library in [PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/levit/modeling_levit.py).
### Motivation
[LeViT](https://arxiv.org/abs/2104.01136) is a family of architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. The TensorFlow port would be an addition to the hybrid architecture families.
### Your contribution
I would like to make the contribution by building out the TensorFlow port.
Tagging: @amyeroberts who could assign me to the task of adding the TensorFlow port of the model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19123/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19123/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19122/events
|
https://github.com/huggingface/transformers/pull/19122
| 1,379,436,771
|
PR_kwDOCUB6oc4_RsTS
| 19,122
|
Skip `test_export_to_onnx` for `LongT5` if `torch` < 1.11
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
With torch `1.10`, we get an exception from a C++ file.
```bash
Exception raised from index_select_out_cpu_ at ../aten/src/ATen/native/TensorAdvancedIndexing.cpp:887 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f60b7f4dd62 in /home/yih_dar_huggingface_co/miniconda3/envs/py39/lib/python3.9/site-packages/torch/lib/libc10.so)
```
Skip this test for torch < 1.11 to **make the past CI clean**.
P.S. this test is only defined for 3 models (T5, LongT5 and FSMT), but skipped (without any condition) for T5 and FSMT.
It should be fine to remove this test, and rely on `tests/onnx`.
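The version gate can be sketched with a stdlib-only check. This is a hypothetical standalone sketch, not the library's actual helper: the class name and the hard-coded version string are illustrative, and a real test would read `torch.__version__` instead.

```python
import unittest

def parse_version(v: str) -> tuple:
    # "1.10.2+cu113" -> (1, 10, 2): drop the local build suffix, keep numeric parts
    core = v.split("+")[0]
    return tuple(int(part) for part in core.split(".") if part.isdigit())

TORCH_VERSION = "1.10.2+cu113"  # hypothetical; read torch.__version__ in a real test

@unittest.skipIf(
    parse_version(TORCH_VERSION) < (1, 11),
    "ONNX export of LongT5 hits a C++ index_select error on torch < 1.11",
)
class LongT5OnnxTest(unittest.TestCase):
    def test_export_to_onnx(self):
        pass  # the actual export test body would go here
```

With the version string above, the tuple `(1, 10, 2)` compares below `(1, 11)`, so the whole test class is skipped.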
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19122/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19122",
"html_url": "https://github.com/huggingface/transformers/pull/19122",
"diff_url": "https://github.com/huggingface/transformers/pull/19122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19122.patch",
"merged_at": 1663703538000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19121/events
|
https://github.com/huggingface/transformers/pull/19121
| 1,379,386,784
|
PR_kwDOCUB6oc4_Rhmj
| 19,121
|
German processing
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
continues https://github.com/huggingface/transformers/issues/18564 @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19121/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19121",
"html_url": "https://github.com/huggingface/transformers/pull/19121",
"diff_url": "https://github.com/huggingface/transformers/pull/19121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19121.patch",
"merged_at": 1663679902000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19120/events
|
https://github.com/huggingface/transformers/pull/19120
| 1,379,091,227
|
PR_kwDOCUB6oc4_QiWQ
| 19,120
|
Add LayoutLMv2ForRelationExtraction
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19120). All of your documentation changes will be reflected on that endpoint.",
"@sgugger I'm getting the following error from `make fixup`:\r\n\r\n```\r\nChecking all objects are properly documented.\r\nTraceback (most recent call last):\r\n File \"/home/niels/python_projects/transformers/utils/check_repo.py\", line 788, in <module>\r\n check_repo_quality()\r\n File \"/home/niels/python_projects/transformers/utils/check_repo.py\", line 782, in check_repo_quality\r\n check_all_objects_are_documented()\r\n File \"/home/niels/python_projects/transformers/utils/check_repo.py\", line 693, in check_all_objects_are_documented\r\n raise Exception(\r\nException: The following objects are in the public init so should be documented:\r\n - LayoutLMv2ForRelationExtraction\r\n```\r\n\r\nHowever, this model is added to layoutlmv2.mdx, so not sure why this error occurs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @NielsRogge , any chance that this will ever be implemented?\r\nLooking around the history of the PR and issues, it seems that there was a fair bit of interest",
"Hi @lamaeldo,\r\n\r\nThe reason the PR wasn't merged is because models need to output fixed size tensors, to make sure things like distributed training and ONNX export work. However LayoutLMv2ForRelationExtraction outputs lists of tensors in its current implementation, due to each example in the batch having a different amount of relations. So we would need to pad them up to a fixed size such that the model outputs fixed size tensors.\r\n\r\nHaven't looked into that yet but if you're willing to contribute, let me know!\r\n\r\nBtw I do have a notebook on fine-tuning this model [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutXLM).\r\n\r\n"
] | 1,663
| 1,681
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the relation extraction head of LayoutLMv2, which was a highly requested feature as seen in #14330 #15451 #18091
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19120/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19120",
"html_url": "https://github.com/huggingface/transformers/pull/19120",
"diff_url": "https://github.com/huggingface/transformers/pull/19120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19120.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19119/events
|
https://github.com/huggingface/transformers/pull/19119
| 1,379,065,517
|
PR_kwDOCUB6oc4_Qc37
| 19,119
|
Fix BeitFeatureExtractor postprocessing
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @alaradirik, please also ping a core maintainer for review before merging PRs."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
- Fixes a `BeitFeatureExtractor.post_process_semantic_segmentation()` assertion error when no `target_sizes` argument is provided
- Ensures post_process_semantic_segmentation returns a list of int64 PyTorch tensors
- Adds a test to ensure correct post-processing
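The shape contract of the fix can be sketched without PyTorch. This hypothetical NumPy version shows only the argmax-and-dtype step; the real method also resizes each map to its `target_sizes` entry and returns `torch.int64` tensors:

```python
import numpy as np

def post_process_semantic_segmentation(logits, target_sizes=None):
    # logits: (batch, num_labels, H, W) class scores.
    # Returns a list with one (H, W) int64 label map per image;
    # the real implementation would first resize each map to its target size.
    semantic_maps = logits.argmax(axis=1).astype(np.int64)
    return [m for m in semantic_maps]

logits = np.zeros((1, 2, 2, 2), dtype=np.float32)
logits[0, 1, 0, 0] = 1.0  # label 1 wins at pixel (0, 0)
maps = post_process_semantic_segmentation(logits)
```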
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19119/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19119",
"html_url": "https://github.com/huggingface/transformers/pull/19119",
"diff_url": "https://github.com/huggingface/transformers/pull/19119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19119.patch",
"merged_at": 1663689220000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19118/events
|
https://github.com/huggingface/transformers/issues/19118
| 1,379,012,943
|
I_kwDOCUB6oc5SMhFP
| 19,118
|
CLIPTokenizer behaves inconsistently depending on whether ftfy is installed or not
|
{
"login": "kjsman",
"id": 12594709,
"node_id": "MDQ6VXNlcjEyNTk0NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12594709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kjsman",
"html_url": "https://github.com/kjsman",
"followers_url": "https://api.github.com/users/kjsman/followers",
"following_url": "https://api.github.com/users/kjsman/following{/other_user}",
"gists_url": "https://api.github.com/users/kjsman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kjsman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kjsman/subscriptions",
"organizations_url": "https://api.github.com/users/kjsman/orgs",
"repos_url": "https://api.github.com/users/kjsman/repos",
"events_url": "https://api.github.com/users/kjsman/events{/privacy}",
"received_events_url": "https://api.github.com/users/kjsman/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This happens because `BasicTokenizer`, which is [used as](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/tokenization_clip.py#L159) fallback text fix function, [strips accents if `do_lower_case=True`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L174-L176).\r\n\r\nWe may fix this by explicitly set `strip_accents` to `False`; [ViT/L-14 tokenizer](https://huggingface.co/openai/clip-vit-large-patch14/raw/main/tokenizer.json) includes vocabs with accents, so I think stripping accents should not be done.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run following code without ftfy installed.
```py
from transformers import CLIPTokenizer
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokenizer("résumé") # {'input_ids': [49406, 15077, 49407], 'attention_mask': [1, 1, 1]}
```
2. Run following code with ftfy installed.
```py
from transformers import CLIPTokenizer
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokenizer("résumé") # {'input_ids': [49406, 29106, 7054, 4166, 49407], 'attention_mask': [1, 1, 1, 1, 1]}
```
### Expected behavior
They should work consistently.
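For reference, the non-ftfy path falls back to `BasicTokenizer`, which strips accents when lower-casing. A minimal stdlib sketch of that normalization step (a hypothetical standalone helper, not the library's API):

```python
import unicodedata

def strip_accents(text: str) -> str:
    # NFD-decompose, then drop combining marks (Unicode category "Mn");
    # this is the same kind of normalization applied when stripping accents
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_accents("résumé"))  # -> resume
```

After this step, "résumé" and "resume" map to the same token sequence, which explains the different `input_ids` between the two environments.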
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19118/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19117/events
|
https://github.com/huggingface/transformers/pull/19117
| 1,378,994,669
|
PR_kwDOCUB6oc4_QN5p
| 19,117
|
Fix the wrong schedule in runner check CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
The current (wrong) schedule in `check_runner_status.yml`:
`* */1 * * *` -> “At every minute past every hour.”
But we want
`0 */1 * * *` -> “At minute 0 past every hour.”
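The corrected schedule, shown as a hypothetical workflow fragment (file path assumed from the PR description):

```yaml
# .github/workflows/check_runner_status.yml (fragment)
on:
  schedule:
    # "0 */1 * * *": minute 0 of every hour, i.e. once per hour
    - cron: "0 */1 * * *"
```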
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19117/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19117",
"html_url": "https://github.com/huggingface/transformers/pull/19117",
"diff_url": "https://github.com/huggingface/transformers/pull/19117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19117.patch",
"merged_at": 1663674415000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19116/events
|
https://github.com/huggingface/transformers/issues/19116
| 1,378,979,491
|
I_kwDOCUB6oc5SMY6j
| 19,116
|
HfArgumentParser support yaml parser
|
{
"login": "jiangwangyi",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwangyi",
"html_url": "https://github.com/jiangwangyi",
"followers_url": "https://api.github.com/users/jiangwangyi/followers",
"following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwangyi/orgs",
"repos_url": "https://api.github.com/users/jiangwangyi/repos",
"events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwangyi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"cc @sgugger \r\n\r\nIf you want to open a PR, please go ahead!",
"You can just use\r\n`parser.parse_dict(yaml.safe_load(f))`",
"Which could all go in a `parse_yaml_file` method :-) Doing this and also refactoring the `parse_json_file` to use `parse_dict`, as well as adding small tests would be nice additions that shouldn't be too hard, so putting the \"Good first issue\" label here.\r\n\r\nTo summarize:\r\n- [ ] adding as `parse_yaml_file` method to `HfArgumentParser` with the code above\r\n- [ ] refactor the dupe code between `parse_json_file` and `parse_dict` similar to the code above\r\n- [ ] add a small test of `parse_yaml_file`\r\n- [ ] add a small test of `parse_json_file`\r\n\r\nThis could be done in a single PR or separate ones :-)",
"\r\nHi, I would like to work on it\r\n\r\n",
"How can i write test for `parse_yaml_file` and `parse_json_file` it will require an external json and yaml file to testing",
"No, you can create it during the test by saving some dictionary (look at the `parse_dict` tests) into a temporary file.",
"Hey, @sgugger I have written the test for `parse_yaml_file` and `parse_json_file` using tempfile is it acceptable?? Also it passes the tests.\r\n\r\n",
"You can also use the context manager for a temp dir.\r\n```\r\nwith tempfile.TemporaryDirectory() as tmp_dir:\r\n # Save file in tmp_dir as usual\r\n # do the tests\r\n```\r\nThe plus for this is that it's automatically cleaned up when you exit the with block (whereas the temp file will stay until the next restart).",
"Okay I will change that!"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
### Feature request
HfArgumentParser already supports parsing dicts and JSON files; would it be possible to also support parsing the widely used YAML files?
### Motivation
I think using yaml is a good way to record arguments.
### Your contribution
Not yet.
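For illustration, a minimal sketch of the idea (the `TrainingArgs` dataclass and `parse_known_dict` helper are hypothetical names, not the real `HfArgumentParser` API; JSON is used here only to keep the sketch dependency-free):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class TrainingArgs:
    learning_rate: float = 5e-5
    num_epochs: int = 3

def parse_known_dict(cls, data):
    """Toy version of a parse_dict-style method: keep only keys that
    correspond to dataclass fields and build the dataclass from them."""
    known = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in known})

# With PyYAML installed, a YAML file could feed the same helper via
# parse_known_dict(TrainingArgs, yaml.safe_load(f)).
args = parse_known_dict(TrainingArgs, json.loads('{"learning_rate": 0.001, "unused_key": 1}'))
```

Since YAML also deserializes to a plain dict, supporting it would mostly mean adding a thin file-loading wrapper around the existing dict-parsing path.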
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19116/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19115/events
|
https://github.com/huggingface/transformers/pull/19115
| 1,378,879,352
|
PR_kwDOCUB6oc4_P1Xk
| 19,115
|
[WIP] support auto-compress for glue task
|
{
"login": "ceci3",
"id": 29245900,
"node_id": "MDQ6VXNlcjI5MjQ1OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29245900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceci3",
"html_url": "https://github.com/ceci3",
"followers_url": "https://api.github.com/users/ceci3/followers",
"following_url": "https://api.github.com/users/ceci3/following{/other_user}",
"gists_url": "https://api.github.com/users/ceci3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceci3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceci3/subscriptions",
"organizations_url": "https://api.github.com/users/ceci3/orgs",
"repos_url": "https://api.github.com/users/ceci3/repos",
"events_url": "https://api.github.com/users/ceci3/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceci3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19115). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,669
| 1,669
|
NONE
| null |
# What does this PR do?
Support auto-compression for the GLUE task.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19115/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19115",
"html_url": "https://github.com/huggingface/transformers/pull/19115",
"diff_url": "https://github.com/huggingface/transformers/pull/19115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19115.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19114/events
|
https://github.com/huggingface/transformers/issues/19114
| 1,378,725,479
|
I_kwDOCUB6oc5SLa5n
| 19,114
|
GPT Neox Japanese is in the release notes for v4.22.X but does not appear to be in the v4.22.X package.
|
{
"login": "SO0529",
"id": 67080255,
"node_id": "MDQ6VXNlcjY3MDgwMjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/67080255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SO0529",
"html_url": "https://github.com/SO0529",
"followers_url": "https://api.github.com/users/SO0529/followers",
"following_url": "https://api.github.com/users/SO0529/following{/other_user}",
"gists_url": "https://api.github.com/users/SO0529/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SO0529/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SO0529/subscriptions",
"organizations_url": "https://api.github.com/users/SO0529/orgs",
"repos_url": "https://api.github.com/users/SO0529/repos",
"events_url": "https://api.github.com/users/SO0529/events{/privacy}",
"received_events_url": "https://api.github.com/users/SO0529/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @SO0529, sorry about that! That's on me, I missed this in the release notes. I just removed it. For now you can install the repo from source as you have noted in order to use it.\r\n\r\nThanks for letting me know!",
"Thank you for quick response and update the release note!\r\nWe are looking forward to the next release when GPT NeoX Japanese will be available.\r\nLet's close this issue. Thank you!"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### System Info
I've checked in Colab that GPT Neox Japanese is not in v4.22.1
```
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Open colab
2. `!pip install transformers` to install the latest version (currently v4.22.1)
3. `from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer`
then raise below import error.
```---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[<ipython-input-7-401722e84f23>](https://localhost:8080/#) in <module>
----> 1 from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
ImportError: cannot import name 'GPTNeoXJapaneseForCausalLM' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
From [this release note](https://github.com/huggingface/transformers/releases/tag/v4.22.0), I understood that `GPT NeoX Japanese` would be usable from the v4.22.0 release. And indeed, the model was not included in the zip file of the v4.22.0 release...
I would be glad to know the situation :smiley:
FYI
We can use GPT NeoX Japanese from `pip install git+https://github.com/huggingface/transformers` because it's in the main branch.
### Expected behavior
`GPT Neox Japanese` must be able to import correctly.
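A quick way to check whether an installed release actually ships a given class (the `has_symbol` helper is my own; the demonstration below uses a stdlib module so it runs anywhere):

```python
import importlib

def has_symbol(module_name: str, symbol: str) -> bool:
    """Return True if `symbol` is an attribute of the imported module."""
    return hasattr(importlib.import_module(module_name), symbol)

# e.g. has_symbol("transformers", "GPTNeoXJapaneseForCausalLM") should be
# False on v4.22.1 but True on an install from the main branch.
check = (has_symbol("json", "loads"), has_symbol("json", "not_a_symbol"))
```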
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19114/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19113/events
|
https://github.com/huggingface/transformers/pull/19113
| 1,378,611,520
|
PR_kwDOCUB6oc4_O9Sw
| 19,113
|
Add a missing space in a script arg documentation
|
{
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19113/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19113",
"html_url": "https://github.com/huggingface/transformers/pull/19113",
"diff_url": "https://github.com/huggingface/transformers/pull/19113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19113.patch",
"merged_at": 1663703012000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19112/events
|
https://github.com/huggingface/transformers/issues/19112
| 1,378,518,590
|
I_kwDOCUB6oc5SKoY-
| 19,112
|
Problem trying to migrate cache
|
{
"login": "mpolinsky",
"id": 30514239,
"node_id": "MDQ6VXNlcjMwNTE0MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30514239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpolinsky",
"html_url": "https://github.com/mpolinsky",
"followers_url": "https://api.github.com/users/mpolinsky/followers",
"following_url": "https://api.github.com/users/mpolinsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mpolinsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpolinsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpolinsky/subscriptions",
"organizations_url": "https://api.github.com/users/mpolinsky/orgs",
"repos_url": "https://api.github.com/users/mpolinsky/repos",
"events_url": "https://api.github.com/users/mpolinsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpolinsky/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"I'm not too sure what the problem is, maybe it is due to the intermediate subfolder. This error should only happen once in any case, and you might have lost some cached files, but they will just be re-downloaded.",
"Thanks for the clarity. Didn't cause a problem otherwise.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,667
| 1,667
|
NONE
| null |
### System Info
Using MacOS Big Sur v11.6.4 and jupyter lab v3.4.7.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Installed transformers in a new venv with pip
2. Imported autocast from torch and diffusers in jupyter lab, it automatically attempted to migrate the cache and I received this error asking me to open an issue.
code:
```
from torch import autocast
from diffusers import StableDiffusionPipeline
```
<img width="918" alt="differror" src="https://user-images.githubusercontent.com/30514239/191115866-5e239ee8-23b5-4f7d-b55f-0646a63d86a7.png">
### Expected behavior
I opened this because the error messaged asked me to. I imagine the expected behavior is migrating the cache without incident.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19112/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19111/events
|
https://github.com/huggingface/transformers/issues/19111
| 1,378,363,296
|
I_kwDOCUB6oc5SKCeg
| 19,111
|
DPR pooler weights not loading correctly
|
{
"login": "maxmatical",
"id": 8890262,
"node_id": "MDQ6VXNlcjg4OTAyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8890262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmatical",
"html_url": "https://github.com/maxmatical",
"followers_url": "https://api.github.com/users/maxmatical/followers",
"following_url": "https://api.github.com/users/maxmatical/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmatical/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmatical/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmatical/subscriptions",
"organizations_url": "https://api.github.com/users/maxmatical/orgs",
"repos_url": "https://api.github.com/users/maxmatical/repos",
"events_url": "https://api.github.com/users/maxmatical/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmatical/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ArthurZucker could you take a look here (happy to answer questions about the model)",
"on it! ",
"Hi @ArthurZucker , do you have any updates on this by chance? I'm getting the same issue, but the results I get when benchmarking do not suggest random initialization of the weights.",
"Hey, it seems that the issue comes from the following [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dpr/modeling_dpr.py#L178), where it is hardcoded that `add_pooling_layer=False`. Setting it to `True` fixes the issue. Now I am not very familiar with the model, but a few tests seem to be linked to that checkpoint. Let me open a PR for a fix!",
"Hey! So as mentioned here in a previous [PR](https://github.com/huggingface/transformers/pull/15068/commits/95eaf44ce93bbf46b552c606957f98b354dae73a), the optional pooling layer was removed as no checkpoints use it. \r\n\r\nMy first question would then be: do you need the `BERTPoolerLayer`? It is indeed a bit confusing that the pooling output does not come from the `BertPoolerLayer`. Have a look at #14486, I think it explains pretty well what's going on here. \r\n\r\nWe have two ways to go about this: \r\n1. We add an argument in the config of DPR, and take care to update the online config so there are no breaking changes.\r\n2. If you don't need it, then we just add a warning/update the online weights doing `from_pretrained` then `push_to_hub`, and the checkpoints will then not include the `pooler` weights 😄 ",
"Hi Arthur, I think I understand. Thanks for getting back so quickly! Its performance suggests that the model is loading correctly so that must be it! Thanks!"
] | 1,663
| 1,668
| 1,668
|
NONE
| null |
### System Info
tested on multiple versions
- `transformers` version: 4.12.3
- Platform: Linux-4.14.281-212.502.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:no
another environment
- `transformers` version: 4.16.2
- Platform: macOS-12.6-x86_64-i386-64bit
- Python version: 3.9.7
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten @lhoestq
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import DPRContextEncoder, DPRQuestionEncoder
question_encoder_path = "facebook/dpr-question_encoder-single-nq-base" # can also be a custom checkpoint
answer_encoder_path = "facebook/dpr-ctx_encoder-single-nq-base"
DPRQuestionEncoder.from_pretrained(question_encoder_path)
DPRContextEncoder.from_pretrained(answer_encoder_path)
```
results in the following message
```
Some weights of the model checkpoint at facebook/dpr-question_encoder-single-nq-base were not used when initializing DPRQuestionEncoder: ['question_encoder.bert_model.pooler.dense.weight', 'question_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRQuestionEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRQuestionEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base were not used when initializing DPRContextEncoder: ['ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRContextEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRContextEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
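The warning above is emitted when a checkpoint contains tensor keys that have no counterpart in the instantiated model. A toy illustration of that set difference (the key lists here are abbreviated, made-up examples, not the full checkpoint contents):

```python
# Keys present in the saved checkpoint (abbreviated example).
checkpoint_keys = {
    "question_encoder.bert_model.encoder.layer.0.attention.self.query.weight",
    "question_encoder.bert_model.pooler.dense.weight",
    "question_encoder.bert_model.pooler.dense.bias",
}
# Keys registered by the instantiated model, which has no pooler.
model_keys = {
    "question_encoder.bert_model.encoder.layer.0.attention.self.query.weight",
}
# from_pretrained reports this difference as "weights ... were not used".
unused = sorted(checkpoint_keys - model_keys)
```

In this case the unused keys are exactly the two pooler parameters, which matches the warning text.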
### Expected behavior
Model loads successfully without re-initializing weights.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19111/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19110/events
|
https://github.com/huggingface/transformers/pull/19110
| 1,378,296,181
|
PR_kwDOCUB6oc4_N5VR
| 19,110
|
Add documentation of Trainer.create_model_card
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds some documentation for `Trainer.create_model_card` and fixes the type annotations.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19110/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19110",
"html_url": "https://github.com/huggingface/transformers/pull/19110",
"diff_url": "https://github.com/huggingface/transformers/pull/19110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19110.patch",
"merged_at": 1663620951000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19109/events
|
https://github.com/huggingface/transformers/pull/19109
| 1,378,286,072
|
PR_kwDOCUB6oc4_N3Lh
| 19,109
|
Don't warn of move if cache is empty
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
This PR makes sure the warning(s) about moving cache are only issued when there is a cache (so not on a fresh install).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19109/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19109/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19109",
"html_url": "https://github.com/huggingface/transformers/pull/19109",
"diff_url": "https://github.com/huggingface/transformers/pull/19109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19109.patch",
"merged_at": 1663615638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19108/events
|
https://github.com/huggingface/transformers/issues/19108
| 1,378,089,737
|
I_kwDOCUB6oc5SI_sJ
| 19,108
|
Flax vs torch benchmark on Wav2vec2
|
{
"login": "ZurabDz",
"id": 34181252,
"node_id": "MDQ6VXNlcjM0MTgxMjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34181252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZurabDz",
"html_url": "https://github.com/ZurabDz",
"followers_url": "https://api.github.com/users/ZurabDz/followers",
"following_url": "https://api.github.com/users/ZurabDz/following{/other_user}",
"gists_url": "https://api.github.com/users/ZurabDz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZurabDz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZurabDz/subscriptions",
"organizations_url": "https://api.github.com/users/ZurabDz/orgs",
"repos_url": "https://api.github.com/users/ZurabDz/repos",
"events_url": "https://api.github.com/users/ZurabDz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZurabDz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"maybe of interest to @sanchit-gandhi ",
"Hey @ZurabDz! `FlaxWav2Vec2ForCTC` should be faster than `Wav2Vec2ForCTC` **if** the `__call__` method is just in time (JIT) compiled (_c.f._ https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html). Could you share your code for running this benchmark? We can then go through and make sure the Flax model is appropriately set-up to get max performance!\r\n\r\nAlso of interest: this notebook which JIT compiles the `__call__` method for BLOOM https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb\r\nYou can see the speed up you get by JIT'ing the fprop! We can do something similar for your benchmark, comparing the iteration time of PyTorch to Flax (rather than the accuracy).",
"@sanchit-gandhi\r\nSo the code I use was something like this:\r\n\r\n```python3\r\n'''\r\n    Trying inference with jax. Note: it errors out without modifying source code currently\r\n    I was only concerned with speed so just silent errors: \r\n    assigned self.config.do_stable_layer_norm = True in modeling_flax_wav2vec2.py\r\n    assigned self.config.feat_extract_norm = \"layer\"\r\n'''\r\nfrom transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC\r\n\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\nmodel = FlaxWav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\", from_pt=True)\r\n\r\nsig, sr = torchaudio.load('out.mp3') # Make sure you have linux and ffmpeg 4 or use wav/mp3 format + soundfile/librosa\r\n# preprocess, this is computed in prefetch don't care what time will it take...(in my pipeline)\r\ninput_values = processor(sig[0], sampling_rate=16_000, return_tensors=\"pt\").input_values \r\n\r\n%%timeit # jupyter magic or you could use time \r\nlogits = model(input_values).logits\r\n```\r\n\r\n```python3\r\n''' \r\n    Just standard inference nothing fancy\r\n'''\r\nfrom transformers import Wav2Vec2Processor, Wav2Vec2Processor\r\n\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\nmodel = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\nsig, sr = torchaudio.load('out.mp3') # Make sure you have linux and ffmpeg 4 or use wav format + soundfile/librosa\r\n# preprocess, this is computed in prefetch don't care what time will it take...(in my pipeline)\r\ninput_values = processor(sig[0], sampling_rate=16_000, return_tensors=\"pt\").input_values \r\n\r\n%%timeit # jupyter magic or you could use time \r\nlogits = model(input_values).logits\r\n```\r\n\r\nnow whats interesting is that, with flax inference GPU utilisation is jumpy from 0-20% there might be some problem\r\nin memory allocation on cuda idk... \r\n\r\n\r\nTried this: \r\n\r\n```python3\r\n@jax.jit\r\ndef flax_model_jitted(input_values):\r\n    return model(input_values).logits\r\n```\r\n\r\nseems like jit expects known type for flax so, added something like this as well `input_values = numpy.array(input_values)`\r\nin this case GPU was not used. On CPU speed up definitely is present. \r\n\r\nI installed cuda, cudnn, flax and jax with following way:\r\n```bash\r\nconda install -c conda-forge cudatoolkit-dev=11.2 cudnn=8.2.0\r\npip install -U jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\r\n```\r\n\r\nP.s What do you guys think about [custom forward written in cuda](https://pytorch.org/tutorials/advanced/cpp_extension.html#custom-c-and-cuda-extensions) vs flax assuming it(flax) performs to its peak?",
"Oops accidentally closed an issue sorry guys ",
"> in this case GPU was not used\r\n\r\nYou can verify that you're running on an accelerator device by checking the number of JAX devices:\r\n```python\r\nprint(jax.device_count())\r\n```\r\nThis will tell you if you're on CPU or GPU!\r\n\r\n> On CPU speed up definitely is present.\r\n\r\nDid you make sure to use `.block_until_ready()` on the output logits? https://jax.readthedocs.io/en/latest/async_dispatch.html\r\n\r\nPerhaps you could post your full code snippet for the JIT benchmark!\r\n\r\nI'd do something as follows:\r\n\r\n```python\r\n@jax.jit\r\ndef flax_model_jitted(input_values):\r\n return model(input_values).logits\r\n\r\ninput_values = jnp.array(input_values)\r\n```\r\n\r\n```python\r\n# Compilation time (should be ~s)\r\n%time logits = flax_model_jitted(input_values=input_values).block_until_ready()\r\n```\r\n\r\n```python\r\n# Compiled time (should be ~ms)\r\n%time logits = flax_model_jitted(input_values=input_values).block_until_ready()\r\n```\r\n\r\nYou can refer to the ipynb for a template on how to set up a performance test: https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb\r\n",
"```python3\r\nimport jax\r\n\r\n# This prints 1\r\nprint(jax.device_count(backend='gpu'))\r\n```\r\nUnfortunately, GPU utilisation is still 0% which means inference is still done on CPU. Memory is definitely allocated when model is loaded but after that, nothing really happens on it.\r\nCurrently flax benchmark looks like this:\r\n\r\n```python3\r\nfrom transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC\r\nimport torch\r\nimport torchaudio\r\nimport jax\r\nfrom jax import numpy\r\n\r\nprint(jax.device_count(backend='gpu')) # this prints 1\r\n\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\nmodel = FlaxWav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\", from_pt=True)\r\n\r\nsig, sr = torchaudio.load('out.mp3')\r\ninput_values = processor(sig[0], sampling_rate=16_000, return_tensors=\"pt\").input_values \r\ninput_values = numpy.array(input_values)\r\n\r\n@jax.jit\r\ndef flax_model_jitted(input_values):\r\n return model(input_values).logits\r\n \r\n%%timeit\r\nlogits = flax_model_jitted(input_values=input_values).block_until_ready()\r\n# 90.8 ms ± 41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\r\n\r\n%%timeit\r\nlogits = flax_model_jitted(input_values=input_values).block_until_ready() \r\n# 62.8 ms ± 2.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\r\n```\r\n\r\nIs something wrong with `jax.numpy.array`? Do I need to somehow force things into GPU? \r\n`print(jax.device_count(backend='gpu')) # this prints 1` is this what I should be expecting for GPU usage?\r\n",
"@sanchit-gandhi sorry for pinging, but any thoughts on what could be the reasoning for such weird results?",
"Hey @ZurabDz! Sorry for the late reply. It looks like JAX is recognising your GPU which is good! The problem likely lies in your preparation of the inputs. First, what I'd try is returning the input values as np arrays:\r\n```python\r\nimport jax.numpy as jnp\r\n\r\nsig, sr = torchaudio.load(\"out.mp3\")\r\ninput_values = processor(sig[0], sampling_rate=16_000, return_tensors=\"np\").input_values \r\ninput_values_jnp = jnp.array(input_values)\r\n```\r\nand then pass these to the model.\r\n\r\nIf that does not help, then you can try using `device_put()` as explained in [multiplying-matrices](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html#multiplying-matrices).",
"Sorry, currently I am unable to test ```device_put``` I am occupied with a different problem. Maybe we should close an issue and open it later if the problem persists.",
"Hey @ZurabDz! Sure, let's close it for now and re-open if you continue to encounter this problem. Feel free to open a new issue for the different problem you are facing and tag me!"
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
So my question is, should FlaxWav2Vec2ForCTC generally be faster than Wav2Vec2ForCTC?
1.14 s ± 138 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) -> FlaxWav2Vec2ForCTC
37.7 ms ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) -> Wav2Vec2ForCTC
so the question is, should flax be faster than the default torch model?
P.S: benchmarks are done on GPU, it seems like VRAM usage is drastically larger on flax for some reason as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19108/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19107/events
|
https://github.com/huggingface/transformers/pull/19107
| 1,378,086,831
|
PR_kwDOCUB6oc4_NMYX
| 19,107
|
Add post_process_semantic_segmentation method to DPTFeatureExtractor
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds post_process_semantic_segmentation method to DPTFeatureExtractor.
I will open an issue and separate PRs to make sure that:
- Segmentation models (DETR, MaskFormer, SegFormer, etc.) have consistently named post-processing methods, arguments and outputs
- ImageSegmentationPipeline works with all available segmentation models
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19107/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19107",
"html_url": "https://github.com/huggingface/transformers/pull/19107",
"diff_url": "https://github.com/huggingface/transformers/pull/19107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19107.patch",
"merged_at": 1663762527000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19106/events
|
https://github.com/huggingface/transformers/pull/19106
| 1,378,066,388
|
PR_kwDOCUB6oc4_NIHF
| 19,106
|
Michael branch
|
{
"login": "michaellin99999",
"id": 57974973,
"node_id": "MDQ6VXNlcjU3OTc0OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/57974973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaellin99999",
"html_url": "https://github.com/michaellin99999",
"followers_url": "https://api.github.com/users/michaellin99999/followers",
"following_url": "https://api.github.com/users/michaellin99999/following{/other_user}",
"gists_url": "https://api.github.com/users/michaellin99999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaellin99999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaellin99999/subscriptions",
"organizations_url": "https://api.github.com/users/michaellin99999/orgs",
"repos_url": "https://api.github.com/users/michaellin99999/repos",
"events_url": "https://api.github.com/users/michaellin99999/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaellin99999/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19106). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19106/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19106",
"html_url": "https://github.com/huggingface/transformers/pull/19106",
"diff_url": "https://github.com/huggingface/transformers/pull/19106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19106.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19105/events
|
https://github.com/huggingface/transformers/pull/19105
| 1,378,048,288
|
PR_kwDOCUB6oc4_NEVW
| 19,105
|
Add semantic segmentation post-processing method to MobileViT
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds post_process_semantic_segmentation method to `MobileViTFeatureExtractor`.
I will open an issue and separate PRs to make sure that
- Segmentation models (DETR, MaskFormer, SegFormer, etc.) have consistently named post-processing methods, arguments and outputs
- ImageSegmentationPipeline works with all available segmentation models
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19105/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19105",
"html_url": "https://github.com/huggingface/transformers/pull/19105",
"diff_url": "https://github.com/huggingface/transformers/pull/19105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19105.patch",
"merged_at": 1663939469000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19104/events
|
https://github.com/huggingface/transformers/pull/19104
| 1,377,869,789
|
PR_kwDOCUB6oc4_MeLE
| 19,104
|
[wip: test doc-builder]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Testing https://github.com/huggingface/doc-builder/pull/296
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19104/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19104",
"html_url": "https://github.com/huggingface/transformers/pull/19104",
"diff_url": "https://github.com/huggingface/transformers/pull/19104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19104.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19103/events
|
https://github.com/huggingface/transformers/pull/19103
| 1,377,818,909
|
PR_kwDOCUB6oc4_MTJu
| 19,103
|
Improve vision models docs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Improves the docs of several vision models:
* for `xxxForMaskedImageModeling` models, add a link to the `run_mim.py` script
* for `ViTMAEForPreTraining`, add a link to the `run_mae.py` script
* for ViT, add a tip about interpolation of pre-trained position embeddings (in order to fine-tune on higher resolution images)
* add figures for ViT and BEiT
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19103/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19103",
"html_url": "https://github.com/huggingface/transformers/pull/19103",
"diff_url": "https://github.com/huggingface/transformers/pull/19103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19103.patch",
"merged_at": 1663608155000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19102/events
|
https://github.com/huggingface/transformers/pull/19102
| 1,377,774,978
|
PR_kwDOCUB6oc4_MJgL
| 19,102
|
TF: check embeddings range
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(cc @ydshieh -- this fixes a large number of scheduled CI failures)",
"@gante Is this PR ready to merge? I guess so, but would like to wait your confirmation (or better for you to merge).",
"> @gante Is this PR ready to merge? I guess so, but would like to wait your confirmation (or better for you to merge).\r\n\r\n@ydshieh It was ready -- merged now :D "
] | 1,663
| 1,663
| 1,663
|
MEMBER
| null |
# What does this PR do?
Adds the same [check that was recently added to TFBart](https://github.com/huggingface/transformers/blob/ba7f2173cc578fe6d9f1cdb900d5af609f195cf6/src/transformers/models/bart/modeling_tf_bart.py#L751), which asserts that the inputs are within the embedding input range, in all models with token embeddings. As a reminder: TF doesn't enforce this check by default on `tf.gather`-dependent operations on GPU, returning a vector of `0.0` when out of bounds.
After this change, all `test_embeddings_out_of_bounds_raise_exception` tests pass (36 failures in the previous scheduled CI).
To simplify the review, there are 3 models you should check. All others are copy/paste from these.
1. Bert (Encoder)
2. GPT2 (Decoder)
3. Pegasus (Encoder-Decoder with `TFSharedEmbeddings` or `TFWrappedEmbeddings`. Encoder-Decoder models that only use the embeddings at the decoder, like Speech2Text, also follow the same code pattern)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19102/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19102",
"html_url": "https://github.com/huggingface/transformers/pull/19102",
"diff_url": "https://github.com/huggingface/transformers/pull/19102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19102.patch",
"merged_at": 1663849312000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19101/events
|
https://github.com/huggingface/transformers/pull/19101
| 1,377,762,175
|
PR_kwDOCUB6oc4_MGtX
| 19,101
|
Fix push ci workflow file
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
#19054 breaks push CI due to a missing working dir in CI workflow file. Sorry.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19101/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19101",
"html_url": "https://github.com/huggingface/transformers/pull/19101",
"diff_url": "https://github.com/huggingface/transformers/pull/19101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19101.patch",
"merged_at": 1663585764000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19100/events
|
https://github.com/huggingface/transformers/pull/19100
| 1,377,747,345
|
PR_kwDOCUB6oc4_MDfc
| 19,100
|
Revert "Check self-hosted runners are online"
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,663
| 1,675
| 1,663
|
COLLABORATOR
| null |
Reverts huggingface/transformers#19054
Sorry, but I merged a PR that breaks push CI. Will try to fix it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19100/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19100",
"html_url": "https://github.com/huggingface/transformers/pull/19100",
"diff_url": "https://github.com/huggingface/transformers/pull/19100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19100.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19099/events
|
https://github.com/huggingface/transformers/pull/19099
| 1,377,745,293
|
PR_kwDOCUB6oc4_MDCj
| 19,099
|
Beit postprocessing
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a post-processing method to BeiTFeatureExtractor for semantic segmentation.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19099/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19099",
"html_url": "https://github.com/huggingface/transformers/pull/19099",
"diff_url": "https://github.com/huggingface/transformers/pull/19099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19099.patch",
"merged_at": 1663659716000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19098/events
|
https://github.com/huggingface/transformers/issues/19098
| 1,377,707,002
|
I_kwDOCUB6oc5SHiP6
| 19,098
|
Some feature requests for the Trainer
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"For 1, just overwrite the output dir with `--overwrite_output_dir` to have a clean output dir for the beginning of training (when resuming there shouldn't be any error since the repo will be synced with the local folder). If we stop leveraging `Repository`, we lose the async pushes so basically training will be interrupted each time there is a push, until the push is finished.\r\n\r\nFor 2, pushes are synced with saves, so change your `save_strategy` to `\"steps\"` and set the `save_steps` to the value of your liking.",
"Thanks for clarifying, makes sense!"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### Feature request
2 feature requests for the HuggingFace Trainer:
* seems like the Trainer is currently using `huggingface_hub.Repository` when pushing a model to the hub. Would be great to update this to leverage the new HTTP methods, as currently I'm getting errors like the following:
```
OSError Traceback (most recent call last)
[<ipython-input-37-be354f2ba166>](https://localhost:8080/#) in <module>
----> 1 trainer.push_to_hub("nielsr/layoutxlm-xfund-fr-relation-extraction")
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in push_to_hub(self, commit_message, blocking, **kwargs)
3373 # it might fail.
3374 if not hasattr(self, "repo"):
-> 3375 self.init_git_repo()
3376
3377 if self.args.should_save:
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in init_git_repo(self, at_init)
3255 clone_from=repo_name,
3256 use_auth_token=use_auth_token,
-> 3257 private=self.args.hub_private_repo,
3258 )
3259 except EnvironmentError:
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py](https://localhost:8080/#) in __init__(self, local_dir, clone_from, repo_type, use_auth_token, git_user, git_email, revision, private, skip_lfs_files, client)
496
497 if clone_from is not None:
--> 498 self.clone_from(repo_url=clone_from)
499 else:
500 if is_git_repo(self.local_dir):
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py](https://localhost:8080/#) in clone_from(self, repo_url, use_auth_token)
725 if not in_repository:
726 raise EnvironmentError(
--> 727 "Tried to clone a repository in a non-empty folder that isn't a"
728 " git repository. If you really want to do this, do it"
729 " manually:\ngit init && git remote add origin && git pull"
OSError: Tried to clone a repository in a non-empty folder that isn't a git repository. If you really want to do this, do it manually:
git init && git remote add origin && git pull origin main
or clone repo to a new folder and move your existing files there afterwards.
```
* would be great to have an argument `push_to_hub_frequency`, to indicate at which steps to push the model to the hub (seems like it's pushing to the hub every epoch at the moment by default).
### Motivation
Improving the push to hub functionalities of the Trainer.
### Your contribution
I hope @sgugger has the bandwidth to work on this :D
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19098/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19097/events
|
https://github.com/huggingface/transformers/pull/19097
| 1,377,614,382
|
PR_kwDOCUB6oc4_LmnN
| 19,097
|
fix position bias related logic after prune heads in T5 model
|
{
"login": "J-shang",
"id": 33053116,
"node_id": "MDQ6VXNlcjMzMDUzMTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/33053116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J-shang",
"html_url": "https://github.com/J-shang",
"followers_url": "https://api.github.com/users/J-shang/followers",
"following_url": "https://api.github.com/users/J-shang/following{/other_user}",
"gists_url": "https://api.github.com/users/J-shang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/J-shang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/J-shang/subscriptions",
"organizations_url": "https://api.github.com/users/J-shang/orgs",
"repos_url": "https://api.github.com/users/J-shang/repos",
"events_url": "https://api.github.com/users/J-shang/events{/privacy}",
"received_events_url": "https://api.github.com/users/J-shang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19097). All of your documentation changes will be reflected on that endpoint.",
"Maybe of interest to @ArthurZucker :)",
"Hey, before diving a bit deeper, sorry for the long delay, and thanks for the PR. \r\nWould you mind adding a test? 🤗 I can take care of it otherwise! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing in favor of #20106. Thanks for your contribution"
] | 1,663
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
A follow up pr after #17968
If the attention layer has `self.has_relative_attention_bias == False`, then the position bias shape will be wrong; the head number should be the original model's head number (before pruning heads), i.e. `self.n_heads + len(self.pruned_heads)`
## Who can review?
- t5: @patrickvonplaten @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19097/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19097",
"html_url": "https://github.com/huggingface/transformers/pull/19097",
"diff_url": "https://github.com/huggingface/transformers/pull/19097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19097.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19096/events
|
https://github.com/huggingface/transformers/pull/19096
| 1,377,287,618
|
PR_kwDOCUB6oc4_KghS
| 19,096
|
HPO: keep the original logic if there's only one process, pass the tr…
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix @sgugger please review the patch.",
"_The documentation is not available anymore as the PR was closed or merged._",
" need to find out solution for following cases\r\n *if we need to use trial in model_init, how to do it for non-main rank, sync the model with rank0 in app?\r\n *how to use optuna prune feature for DDP, if we do it in rank0, how does other rank know it."
] | 1,663
| 1,666
| 1,663
|
CONTRIBUTOR
| null |
…ial to trainer
need to find out solution for following cases
* if we need to use trial in model_init, how to do it for non-main rank, sync the model with rank0 in app?
* how to use optuna prune feature for DDP, if we do it in rank0, how does other rank know it.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19096/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19096",
"html_url": "https://github.com/huggingface/transformers/pull/19096",
"diff_url": "https://github.com/huggingface/transformers/pull/19096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19096.patch",
"merged_at": 1663620138000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19095/events
|
https://github.com/huggingface/transformers/issues/19095
| 1,377,203,131
|
I_kwDOCUB6oc5SFnO7
| 19,095
|
Activation checkpointing for TFGPT2DoubleHeadsModel
|
{
"login": "visionscaper",
"id": 1189068,
"node_id": "MDQ6VXNlcjExODkwNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/visionscaper",
"html_url": "https://github.com/visionscaper",
"followers_url": "https://api.github.com/users/visionscaper/followers",
"following_url": "https://api.github.com/users/visionscaper/following{/other_user}",
"gists_url": "https://api.github.com/users/visionscaper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/visionscaper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/visionscaper/subscriptions",
"organizations_url": "https://api.github.com/users/visionscaper/orgs",
"repos_url": "https://api.github.com/users/visionscaper/repos",
"events_url": "https://api.github.com/users/visionscaper/events{/privacy}",
"received_events_url": "https://api.github.com/users/visionscaper/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 @gante",
"Hi @visionscaper - this seems like something that could work! We haven't experimented with `tf.recompute_grad` in detail but the core code for training our Keras models is in the `train_step` and `test_step` methods [here](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L1380), so you could try adding it there. Anything you add there will control `model.fit()` for all of our models.\r\n\r\nBear in mind I don't know if this will work - I don't know the exact semantics of `recompute_grad` or if it plays nicely with Graph mode, but if you discover anything or you have any questions feel free to post them here!\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
### Feature request
Activation checkpointing is implemented for the PyTorch GPT2 model (and different head variants); however, this is not the case for the Tensorflow implementation of GPT2.
### Motivation
A lot of GPU memory is required to finetune GPT2. This is especially the case for TFGPT2DoubleHeadsModel, because different choices (represented by different sequences) are combined in one sample. I think [`tf.recompute_grad`](https://www.tensorflow.org/api_docs/python/tf/recompute_grad) can play a role here.
### Your contribution
I've more experience with PyTorch than Tensorflow, but I could investigate possible solution directions. If it turns out to be easy I could spend time to create a PR; however, help by others is appreciated.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19095/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19094/events
|
https://github.com/huggingface/transformers/issues/19094
| 1,377,144,838
|
I_kwDOCUB6oc5SFZAG
| 19,094
|
Allow custom signature while saving TF models
|
{
"login": "dimitreOliveira",
"id": 16668746,
"node_id": "MDQ6VXNlcjE2NjY4NzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16668746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimitreOliveira",
"html_url": "https://github.com/dimitreOliveira",
"followers_url": "https://api.github.com/users/dimitreOliveira/followers",
"following_url": "https://api.github.com/users/dimitreOliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/dimitreOliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimitreOliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimitreOliveira/subscriptions",
"organizations_url": "https://api.github.com/users/dimitreOliveira/orgs",
"repos_url": "https://api.github.com/users/dimitreOliveira/repos",
"events_url": "https://api.github.com/users/dimitreOliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimitreOliveira/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 @gante",
"Hi @dimitreOliveira, that sounds like a great feature, and we'd be happy to accept that PR! We've been working on making our default signatures more general and usable, but this sounds like a good idea too. Are you planning to add a `signatures` argument that's passed through to `model.save()` when `saved_model=True`?",
"Hey @Rocketknight1 I am glad you liked the feature, I am happy to collaborate with the TF side of the lib.\r\n\r\nYes, my idea is to just add a `signature` parameters to the [signatures](https://github.com/huggingface/transformers/blob/ca485e562b675341409e3e27724072fb11e10af7/src/transformers/modeling_tf_utils.py#L2085) function, that parameter would default to `None` and if that is the case we would just use `self.serving` as we already do, this way there would not be any relevant side-effect, and users could just create their custom signatures and pass it while saving. Looking at the code design it seems that this change would be compatible with all TF transformers models ; )\r\nI have not looked yet to see if that would generate any issues with the tests, but if the plan is good I will work on the code during the weekend.\r\n\r\nFor context, the idea for this feature came to me while I was working on [this repository](https://github.com/dimitreOliveira/hf_tf_serving_examples), that also have a collection of custom signatures that range from text classification to text generation.\r\nMaybe this feature also works for the vision and speech models but I do not have a lot of experience with those, maybe later I could also take a look there.",
"@Rocketknight1 @gante you can find the draft PR above, let me know if it looks good, then I can finish the work, if needed, I can provide some examples of cool use cases using custom signatures with the models."
] | 1,663
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### Feature request
Currently, when we use the [save_pretrained](https://github.com/huggingface/transformers/blob/ca485e562b675341409e3e27724072fb11e10af7/src/transformers/modeling_tf_utils.py#L2085) function from this library the model signature used to save the model is the default one that only calls the model on the inputs, I would like to be able to provide a custom signature while using the `save_pretrained` function.
### Motivation
Persisting models with custom signatures is quite important for models that target production setups, especially if they are going to be served with TF Serving.
I might be wrong but it seems that currently, the only way to save a `Transformer` model with a custom signature is by saving it using functions from the TF library, it would be very nice if the HF ecosystem could also support this feature.
### Your contribution
I think this might be simple to implement and I would be happy to draft a PR if you think this could be a helpful feature.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19094/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19094/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19093/events
|
https://github.com/huggingface/transformers/issues/19093
| 1,377,124,264
|
I_kwDOCUB6oc5SFT-o
| 19,093
|
Add BPE Wav2Vec2CTCTokenizer
|
{
"login": "OllieBroadhurst",
"id": 46894149,
"node_id": "MDQ6VXNlcjQ2ODk0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/46894149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OllieBroadhurst",
"html_url": "https://github.com/OllieBroadhurst",
"followers_url": "https://api.github.com/users/OllieBroadhurst/followers",
"following_url": "https://api.github.com/users/OllieBroadhurst/following{/other_user}",
"gists_url": "https://api.github.com/users/OllieBroadhurst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OllieBroadhurst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OllieBroadhurst/subscriptions",
"organizations_url": "https://api.github.com/users/OllieBroadhurst/orgs",
"repos_url": "https://api.github.com/users/OllieBroadhurst/repos",
"events_url": "https://api.github.com/users/OllieBroadhurst/events{/privacy}",
"received_events_url": "https://api.github.com/users/OllieBroadhurst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @SaulLu @ArthurZucker ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### Feature request
Hi there!
Is there scope for a BPE (SentencePiece) CTC tokenizer? Using a trained SentencePiece vocabulary in a CTC model is pretty straightforward - all we need now is a tokenizer that can bridge the missing step between grouping CTC ids and decoding them with SentencePiece.
### Motivation
Grouping characters that occur commonly together is a cool way of sometimes helping with spelling mistakes (like a mini LM) and introduces dependence between characters which can help hold the model's hand.
As far as I know this is fully supported by pyctcdecode so no pipelines/LM processors, etc. need to change.
### Your contribution
The few changes would be something like
```python
def _tokenize(self, text):
return self.sp_model.encode(text, out_type=str)
```
```python
def _convert_token_to_id(self, token):
spm_id = self.sp_model.PieceToId(token)
return spm_id
```
Additional SentenciePiece args would be similar to [other SentencePiece tokenizers](https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaTokenizer), as would be the [saving of the tokenizer](https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L474).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19093/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19092/events
|
https://github.com/huggingface/transformers/pull/19092
| 1,377,020,439
|
PR_kwDOCUB6oc4_July
| 19,092
|
correct spelling in README
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
fixes to typos / spellings
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19092/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19092",
"html_url": "https://github.com/huggingface/transformers/pull/19092",
"diff_url": "https://github.com/huggingface/transformers/pull/19092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19092.patch",
"merged_at": 1663609904000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19091/events
|
https://github.com/huggingface/transformers/issues/19091
| 1,376,954,629
|
I_kwDOCUB6oc5SEqkF
| 19,091
|
TypeError: __init__() got an unexpected keyword argument 'has_model_config'
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
### System Info
- transformers version: 4.18.1
- Platform: Linux Jupyter Notebook, TF2.3 Python 3.6, 2 GPU
- Python version: '1.7.1+cu101'
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@mf
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model_checkpoint = "xlm-roberta-large-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint,add_prefix_space=True)
```
`train_examples ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]}`
```
def isin(a, b):
return a[1] > b[0] and a[0] < b[1]
```
```
def tokenize_and_align_labels(examples, label2id, max_length=256):
tokenized_inputs = tokenizer(examples["texts"], truncation=True, padding='max_length', max_length=max_length,
return_offsets_mapping=True)
labels = []
for i, label_idx_for_single_input in enumerate(tqdm.tqdm(examples["tag_names"])):
labels_for_single_input = ['O' for _ in range(max_length)]
text_offsets = tokenized_inputs['offset_mapping'][i]
for entity in label_idx_for_single_input:
tag = entity['tag']
tag_offset = [entity['start'], entity['end']]
affected_token_ids = [j for j in range(max_length) if isin(tag_offset, text_offsets[j])]
if len(affected_token_ids) < 1:
continue
if any(labels_for_single_input[j] != 'O' for j in affected_token_ids):
continue
for j in affected_token_ids:
labels_for_single_input[j] = 'I_' + tag
labels_for_single_input[affected_token_ids[-1]] = 'L_' + tag
labels_for_single_input[affected_token_ids[0]] = 'B_' + tag
label_ids = [label2id[x] for x in labels_for_single_input]
labels.append(label_ids)
tokenized_inputs["labels"] = labels
print(tokenized_inputs.keys())
return tokenized_inputs
```
```
class MyDataset(torch.utils.data.Dataset):
def __init__(self, examples):
self.encodings = examples
self.labels = examples['labels']
def __getitem__(self, idx):
item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
item["labels"] = torch.tensor([self.labels[idx]])
return item
def __len__(self):
return len(self.labels)
train_data=MyDataset(train_data)
```
```
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint,id2label=id2label,label2id=label2id,ignore_mismatched_sizes=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```
```
args = TrainingArguments(
"xlmroberta-finetuned-ner",
# evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=2,
weight_decay=0.01,
per_device_train_batch_size=4,
# per_device_eval_batch_size=32
fp16=True
# bf16=True #Ampere GPU
)
```
```
trainer = Trainer(
model=model,
args=args,
train_dataset=train_data,
# eval_dataset=train_data,
# data_collator=data_collator,
# compute_metrics=compute_metrics,
tokenizer=tokenizer)
trainer.train()
```
```
Using amp half precision backend
FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 141648
Num Epochs = 2
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 35412
MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute.
TypeError: __init__() got an unexpected keyword argument 'has_model_config'
```
### Expected behavior
To train NER model
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19091/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19090/events
|
https://github.com/huggingface/transformers/issues/19090
| 1,376,863,637
|
I_kwDOCUB6oc5SEUWV
| 19,090
|
[Tracker] [bnb] Supporting `device_map` containing GPU and CPU devices
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"UPDATE (for future readers): the title was changed.\r\n\r\n---\r\n\r\nI think that the title of this issue is a little bit misleading. Technically, a custom `device_map` is already supported for `bitsandbytes`, as long as all the layers are on GPU.\r\n\r\nFor example, in the linked issue, this `device_map` works correctly:\r\n```python\r\n device_map = {\r\n \"transformer.wte\": 0,\r\n \"transformer.wpe\": 0,\r\n \"transformer.ln_f\": 0,\r\n \"lm_head\": 0,\r\n \"transformer.h.0\": 0,\r\n \"transformer.h.1\": 0,\r\n \"transformer.h.2\": 0,\r\n \"transformer.h.3\": 0,\r\n \"transformer.h.4\": 0,\r\n \"transformer.h.5\": 0,\r\n \"transformer.h.6\": 0,\r\n \"transformer.h.7\": 0,\r\n \"transformer.h.8\": 0,\r\n \"transformer.h.9\": 0,\r\n \"transformer.h.10\": 0,\r\n \"transformer.h.11\": 0\r\n }\r\n```\r\n\r\nAnd I believe that there will be no problem in using `1` instead of `0` for any `transformer.*` layer if you have more than one GPU (but I may be mistaken, I didn't find any specific info in any docs about using `bitsandbytes` with multiple GPUs). And I suppose that replacing all `0` with `1` will also work. So, I think that users already can customize the device map, as long as it doesn't put anything on CPU.\r\n\r\nThe original issue was not about a custom map. It was about supporting the `load_in_8bit` flag for models that are shared between CPU and GPU.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> If you think this still needs to be addressed please comment on this thread.\r\n\r\nunstale\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> If you think this still needs to be addressed please comment on this thread.\r\n\r\nunstale\r\n\r\nI guess this will be my monthly routine...",
"Hi\r\nThe PR #20281 will not be merged until a fix will be found on `bitsandbytes` side. \r\nCould you please checkout from this PR if you want to use this feature from now? Thanks.",
"I've just tested that PR and it works. Thank you!\r\n\r\nI tested it with a 13B model on GTX 3060. Without `load_in_8bit` only 10 layers are able to fit into the GPU. With that patch and `load_in_8bit=True` now 19 layers are able to fit into the GPU. Which gives a 30% speedup of the inference in my case.\r\n\r\nFor some reason when I test it on my initial example, it gives this warning:\r\n\r\n```\r\n/home/user/test/bnb-test/transformers/src/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.\r\n warnings.warn(\r\n```\r\n\r\nHowever, I was not able to reproduce it in my other more complex program.\r\n\r\n\r\nIn the PR's discussion it was said:\r\n\r\n> this will result in weights offloaded on the CPU to not be converted in int8 at all\r\n\r\nI expected this much, but I think it's still better than nothing.\r\n\r\nThough, are there some gotchas in the fact that CPU layers are not converted to 8bit?\r\n\r\n\r\nAlso, not sure how to proceed next. You said:\r\n\r\n> we should probably wait until bitsandbytes supports weights offloading in 8-bit to add this feature\r\n\r\nSo I suppose this issue should remain open? I will then add more info to my initial issue at the `bitsandbytes` repo.",
"Thank you very much for your feedback and happy that it worked for your usecase!\r\n\r\n> For some reason when I test it on my initial example, it gives this warning:\r\n\r\nThis is because you have set your `input_ids` on the `cpu` before running your inference! Make sure to set `input_ids` to the device of the first layers (so I guess here, your GPU) before running `generate`.\r\n\r\n> Though, are there some gotchas in the fact that CPU layers are not converted to 8bit?\r\n\r\nI did not quite get your question here, but CPU layers are kept in their native `dtype` here indeed, which can be quite confusing. For example you could provide a device_map that contains only `cpu` layers and still load your model with `load_in_8bit` - users will think that they're loading their model in 8-bit on their CPU when actually it's not the case.\r\n\r\n> So I suppose this issue should remain open? I will then add more info to my initial issue at the bitsandbytes repo.\r\n\r\nYes, it can remain open. But feel free also to jump in the PR #20281 to give your opinion on the question and stress about the fact that you think this feature is useful. You can also add more information on the `bitsandbytes` repo also! \r\n",
"> This is because you have set your `input_ids` on the `cpu` before running your inference! Make sure to set `input_ids` to the device of the first layers (so I guess here, your GPU) before running `generate`.\r\n\r\nI use the following code:\r\n```python\r\npipe = pipeline(\r\n model=\"EleutherAI/gpt-neo-125M\",\r\n max_length=32,\r\n model_kwargs={\r\n \"device_map\": device_map,\r\n \"load_in_8bit\": load_in_8bit\r\n }\r\n)\r\n\r\nprint(\"\\n\", pipe(\"It was\")[0][\"generated_text\"])\r\n```\r\nNot sure where I am supposed to set `input_ids` here.\r\n\r\n\r\n> I did not quite get your question here\r\n\r\nI mean, purely from a technical standpoint, are there some downsides to mixing 8bit and 16/32bit layers?\r\n",
"> Not sure where I am supposed to set input_ids here.\r\n\r\nThanks for sharing the code! It's clearer for me now, can you try to add `device=0` as follows:\r\n```\r\npipe = pipeline(\r\n model=\"EleutherAI/gpt-neo-125M\",\r\n max_length=32,\r\n device=0,\r\n model_kwargs={\r\n \"device_map\": device_map,\r\n \"load_in_8bit\": load_in_8bit\r\n }\r\n\r\n)\r\n```\r\n\r\n> I mean, purely from a technical standpoint, are there some downsides to mixing 8bit and 16/32bit layers?\r\n\r\nIndeed, from a technical standpoint I don't see any downside \r\n",
"When I add `device=0` I get this:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/user/test/bnb-test/main.py\", line 28, in <module>\r\n pipe = pipeline(\r\n File \"/home/user/test/bnb-test/transformers/src/transformers/pipelines/__init__.py\", line 870, in pipeline\r\n return pipeline_class(model=model, framework=framework, task=task, **kwargs)\r\n File \"/home/user/test/bnb-test/transformers/src/transformers/pipelines/text_generation.py\", line 64, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/home/user/test/bnb-test/transformers/src/transformers/pipelines/base.py\", line 778, in __init__\r\n self.model = self.model.to(self.device)\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 987, in to\r\n return self._apply(convert)\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 639, in _apply\r\n module._apply(fn)\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 639, in _apply\r\n module._apply(fn)\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 639, in _apply\r\n module._apply(fn)\r\n [Previous line repeated 1 more time]\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 662, in _apply\r\n param_applied = fn(param)\r\n File \"/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 985, in convert\r\n return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\nNotImplementedError: Cannot copy out of meta tensor; no data!\r\n```\r\n\r\nThe full code for clarity:\r\n```python\r\nfrom transformers import pipeline\r\n\r\nauto_map = False\r\nload_in_8bit = True\r\n\r\nif auto_map:\r\n device_map = \"auto\"\r\nelse:\r\n device_map = {\r\n \"transformer.wte\": 0,\r\n \"transformer.wpe\": 0,\r\n 
\"transformer.ln_f\": \"cpu\",\r\n \"lm_head\": 0,\r\n \"transformer.h.0\": 0,\r\n \"transformer.h.1\": \"cpu\",\r\n \"transformer.h.2\": \"cpu\",\r\n \"transformer.h.3\": \"cpu\",\r\n \"transformer.h.4\": \"cpu\",\r\n \"transformer.h.5\": \"cpu\",\r\n \"transformer.h.6\": \"cpu\",\r\n \"transformer.h.7\": \"cpu\",\r\n \"transformer.h.8\": \"cpu\",\r\n \"transformer.h.9\": \"cpu\",\r\n \"transformer.h.10\": \"cpu\",\r\n \"transformer.h.11\": \"cpu\"\r\n }\r\n\r\npipe = pipeline(\r\n model=\"EleutherAI/gpt-neo-125M\",\r\n device=0,\r\n max_length=32,\r\n model_kwargs={\r\n \"device_map\": device_map,\r\n \"load_in_8bit\": load_in_8bit\r\n }\r\n)\r\n\r\nprint(\"\\n\", pipe(\"It was\")[0][\"generated_text\"])\r\n```\r\n\r\nThe error occurs even when `load_in_8bit = False`.\r\n\r\nAlso, in any case, the original error is pretty confusing. It says `You are calling .generate() with the input_ids`, but I don't do such a thing.",
"Thanks for sharing, I think it is fine, for now I would say that you can leave the pipeline without `device=0`. I expect a small speedup since `accelerate` copies the `input_ids` that is created on the `cpu` to the device of the model at the beginning, and copies back the result on `cpu`. Let me get back to you on this to see if I can find a solution\r\n\r\nthe reason it says `generate()` is because `pipeline` calls `.generate()` under the hood here",
"> the reason it says `generate()` is because `pipeline` calls `.generate()` under the hood here\r\n\r\nI know, but to an end user it still will not be immediately clear what the problem is just by reading that error message. It also says how to fix it:\r\n```\r\nPlease make sure that you have put input_ids to the correct device\r\nby calling for example input_ids = input_ids.to('cuda') before running .generate()\r\n```\r\nBut it's absolutely not applicable in this situation, adding even more confusion. Maybe the call to `pipeline` should have a different error message?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"unstale\r\n\r\nAlso, I added some comments in the PR discussion:\r\nhttps://github.com/huggingface/transformers/pull/20281#issuecomment-1328092770\r\nhttps://github.com/huggingface/transformers/pull/20281#issuecomment-1345605654",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"unstale\r\n\r\nTechnically, I personally don't need this fix anymore, since in my project I applied the hack described in the PR.\r\nThough it would be nice to have it properly integrated into the `transformers`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This should be solved by the introduction of `BitsAndBytesConfig` in #21579 ",
"Yes, indeed it works. Thank you, @younesbelkada!\r\n\r\nFor completeness sake, here's the final working version:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BitsAndBytesConfig, pipeline\r\n\r\ndevice_map = {\r\n \"transformer.wte\": 0,\r\n \"transformer.wpe\": 0,\r\n \"transformer.ln_f\": \"cpu\",\r\n \"lm_head\": 0,\r\n \"transformer.h.0\": 0,\r\n \"transformer.h.1\": \"cpu\",\r\n \"transformer.h.2\": \"cpu\",\r\n \"transformer.h.3\": \"cpu\",\r\n \"transformer.h.4\": \"cpu\",\r\n \"transformer.h.5\": \"cpu\",\r\n \"transformer.h.6\": \"cpu\",\r\n \"transformer.h.7\": \"cpu\",\r\n \"transformer.h.8\": \"cpu\",\r\n \"transformer.h.9\": \"cpu\",\r\n \"transformer.h.10\": \"cpu\",\r\n \"transformer.h.11\": \"cpu\"\r\n}\r\n\r\n\r\nquantization_config = BitsAndBytesConfig(\r\n load_in_8bit=True,\r\n llm_int8_enable_fp32_cpu_offload=True,\r\n llm_int8_skip_modules=[\"lm_head\"]\r\n)\r\n\r\npipe = pipeline(\r\n model=\"EleutherAI/gpt-neo-125M\",\r\n max_length=32,\r\n torch_dtype=torch.float16,\r\n model_kwargs={\r\n \"device_map\": device_map,\r\n \"quantization_config\": quantization_config\r\n }\r\n)\r\n\r\nprint(\"\\n\", pipe(\"It was\")[0][\"generated_text\"])\r\n```"
] | 1,663
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
We should be able to provide custom `device_map` when using 8-bit models using `bitsandbytes`. This would enable users having more control over the modules they want to quantize.
Linked issue: https://github.com/TimDettmers/bitsandbytes/issues/40
### Motivation
Users should be able to pass their own custom `device_map` and choose which modules should be quantized or not
### Your contribution
Try coding this enhancement!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19090/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19089/events
|
https://github.com/huggingface/transformers/pull/19089
| 1,376,840,734
|
PR_kwDOCUB6oc4_JNKU
| 19,089
|
Add type hints for TF MPNet models
|
{
"login": "kishore-s-15",
"id": 56688194,
"node_id": "MDQ6VXNlcjU2Njg4MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/56688194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishore-s-15",
"html_url": "https://github.com/kishore-s-15",
"followers_url": "https://api.github.com/users/kishore-s-15/followers",
"following_url": "https://api.github.com/users/kishore-s-15/following{/other_user}",
"gists_url": "https://api.github.com/users/kishore-s-15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishore-s-15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishore-s-15/subscriptions",
"organizations_url": "https://api.github.com/users/kishore-s-15/orgs",
"repos_url": "https://api.github.com/users/kishore-s-15/repos",
"events_url": "https://api.github.com/users/kishore-s-15/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishore-s-15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks @Rocketknight1 😊.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Based on Issue https://github.com/huggingface/transformers/issues/16059
I have added type hints for all the TensorFlow MPNet models.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19089/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19089",
"html_url": "https://github.com/huggingface/transformers/pull/19089",
"diff_url": "https://github.com/huggingface/transformers/pull/19089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19089.patch",
"merged_at": 1663591053000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19088/events
|
https://github.com/huggingface/transformers/pull/19088
| 1,376,829,389
|
PR_kwDOCUB6oc4_JLDF
| 19,088
|
Added type hints for TFConvBertModel
|
{
"login": "kishore-s-15",
"id": 56688194,
"node_id": "MDQ6VXNlcjU2Njg4MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/56688194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishore-s-15",
"html_url": "https://github.com/kishore-s-15",
"followers_url": "https://api.github.com/users/kishore-s-15/followers",
"following_url": "https://api.github.com/users/kishore-s-15/following{/other_user}",
"gists_url": "https://api.github.com/users/kishore-s-15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishore-s-15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishore-s-15/subscriptions",
"organizations_url": "https://api.github.com/users/kishore-s-15/orgs",
"repos_url": "https://api.github.com/users/kishore-s-15/repos",
"events_url": "https://api.github.com/users/kishore-s-15/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishore-s-15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @Rocketknight1."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Based on Issue #16059
I have added type hints for the [TFConvBertModel](https://huggingface.co/docs/transformers/model_doc/convbert#transformers.TFConvBertModel).
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19088/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19088",
"html_url": "https://github.com/huggingface/transformers/pull/19088",
"diff_url": "https://github.com/huggingface/transformers/pull/19088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19088.patch",
"merged_at": 1663590493000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19087/events
|
https://github.com/huggingface/transformers/issues/19087
| 1,376,804,237
|
I_kwDOCUB6oc5SEF2N
| 19,087
|
v4.22.1 ErnieForMaskedLM Bug
|
{
"login": "wzjj98",
"id": 50035364,
"node_id": "MDQ6VXNlcjUwMDM1MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/50035364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wzjj98",
"html_url": "https://github.com/wzjj98",
"followers_url": "https://api.github.com/users/wzjj98/followers",
"following_url": "https://api.github.com/users/wzjj98/following{/other_user}",
"gists_url": "https://api.github.com/users/wzjj98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wzjj98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wzjj98/subscriptions",
"organizations_url": "https://api.github.com/users/wzjj98/orgs",
"repos_url": "https://api.github.com/users/wzjj98/repos",
"events_url": "https://api.github.com/users/wzjj98/events{/privacy}",
"received_events_url": "https://api.github.com/users/wzjj98/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @wzjj98, could you please share a bit more information about your PC setup and the script you're calling?\r\n\r\nI can confirm it shouldn't be a problem to import `ErnieForMaskedLM` with `transformers==4.22.1`",
"My GPU NVIDIA GeForce GTX 3090\r\nProcessor:Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz\r\n",
"\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
### System Info

@LysandreJik
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import ErnieForMaskedLM
### Expected behavior
Failed to import transformers.models.ernie
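Ernie model classes landed in transformers around v4.22.0, so an import failure like this usually points at an older installed version or a broken environment rather than a bug in the class itself. A minimal sketch of a version gate (pure Python, deliberately not importing transformers since the actual traceback isn't shown; the threshold is an assumption):

```python
# Sketch: compare an installed version string against the first release
# believed to ship Ernie (4.22.0). The version strings below are illustrative.
def version_tuple(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints, ignoring suffixes like 'dev0'."""
    parts = []
    for piece in version.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def supports_ernie(installed: str) -> bool:
    return version_tuple(installed) >= (4, 22, 0)

# e.g. transformers.__version__ could be fed in here
print(supports_ernie("4.22.1"))  # True
print(supports_ernie("4.21.3"))  # False
```

If the check passes but the import still fails, the environment (e.g. a stale cached install) is the more likely culprit.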
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19087/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19086/events
|
https://github.com/huggingface/transformers/pull/19086
| 1,376,792,487
|
PR_kwDOCUB6oc4_JEVA
| 19,086
|
Added type hints for YolosForObjectDetection
|
{
"login": "kishore-s-15",
"id": 56688194,
"node_id": "MDQ6VXNlcjU2Njg4MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/56688194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishore-s-15",
"html_url": "https://github.com/kishore-s-15",
"followers_url": "https://api.github.com/users/kishore-s-15/followers",
"following_url": "https://api.github.com/users/kishore-s-15/following{/other_user}",
"gists_url": "https://api.github.com/users/kishore-s-15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishore-s-15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishore-s-15/subscriptions",
"organizations_url": "https://api.github.com/users/kishore-s-15/orgs",
"repos_url": "https://api.github.com/users/kishore-s-15/repos",
"events_url": "https://api.github.com/users/kishore-s-15/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishore-s-15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks @Rocketknight1."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Based on Issue #16059
I have added type hints for the `YolosForObjectDetection` model.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance.
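As a sketch of what such a type-hinting PR changes (the names below are illustrative stand-ins, not the actual YOLOS signature), the untyped `forward` arguments gain `Optional`/`Union` annotations that tools and docs can introspect:

```python
from typing import Dict, List, Optional, Tuple, Union


class FakeOutput:
    """Stand-in for the model's output dataclass in this sketch."""
    pass


def forward(
    pixel_values: Optional[list] = None,
    labels: Optional[List[Dict]] = None,
    output_attentions: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, FakeOutput]:
    """Annotated signature; the real model body is omitted here."""
    return FakeOutput()


# The annotations become machine-readable:
print(forward.__annotations__["return_dict"])  # typing.Optional[bool]
```

This is what documentation generators and type checkers read when rendering the signature shown in the HF docs.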
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19086/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19086",
"html_url": "https://github.com/huggingface/transformers/pull/19086",
"diff_url": "https://github.com/huggingface/transformers/pull/19086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19086.patch",
"merged_at": 1663625065000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19085/events
|
https://github.com/huggingface/transformers/pull/19085
| 1,376,787,749
|
PR_kwDOCUB6oc4_JDgJ
| 19,085
|
Added Type hints for VIT MAE
|
{
"login": "kishore-s-15",
"id": 56688194,
"node_id": "MDQ6VXNlcjU2Njg4MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/56688194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishore-s-15",
"html_url": "https://github.com/kishore-s-15",
"followers_url": "https://api.github.com/users/kishore-s-15/followers",
"following_url": "https://api.github.com/users/kishore-s-15/following{/other_user}",
"gists_url": "https://api.github.com/users/kishore-s-15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishore-s-15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishore-s-15/subscriptions",
"organizations_url": "https://api.github.com/users/kishore-s-15/orgs",
"repos_url": "https://api.github.com/users/kishore-s-15/repos",
"events_url": "https://api.github.com/users/kishore-s-15/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishore-s-15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @Rocketknight1."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Based on Issue #16059
While looking through the codebase, I found that the ViTMAE model doesn't have the type hints suggested in Issue #16059. Added type hints for the [ViTMAEModel](https://huggingface.co/docs/transformers/model_doc/vit_mae#transformers.ViTMAEModel) and [ViTMAEForPreTraining](https://huggingface.co/docs/transformers/model_doc/vit_mae#transformers.ViTMAEForPreTraining) models.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19085/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19085",
"html_url": "https://github.com/huggingface/transformers/pull/19085",
"diff_url": "https://github.com/huggingface/transformers/pull/19085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19085.patch",
"merged_at": 1663591039000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19084/events
|
https://github.com/huggingface/transformers/pull/19084
| 1,376,772,016
|
PR_kwDOCUB6oc4_JApN
| 19,084
|
Added type hints to ResNetForImageClassification
|
{
"login": "kishore-s-15",
"id": 56688194,
"node_id": "MDQ6VXNlcjU2Njg4MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/56688194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kishore-s-15",
"html_url": "https://github.com/kishore-s-15",
"followers_url": "https://api.github.com/users/kishore-s-15/followers",
"following_url": "https://api.github.com/users/kishore-s-15/following{/other_user}",
"gists_url": "https://api.github.com/users/kishore-s-15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kishore-s-15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kishore-s-15/subscriptions",
"organizations_url": "https://api.github.com/users/kishore-s-15/orgs",
"repos_url": "https://api.github.com/users/kishore-s-15/repos",
"events_url": "https://api.github.com/users/kishore-s-15/events{/privacy}",
"received_events_url": "https://api.github.com/users/kishore-s-15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @Rocketknight1."
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Based on Issue #16059.
While looking into Issue #16059, I found that the `ResNetForImageClassification` model's type hints are inconsistent with the [docs](https://huggingface.co/docs/transformers/model_doc/resnet#transformers.ResNetForImageClassification.forward). Modified it to be consistent with the docs.
@Rocketknight1 Could you kindly check if this is fine?
This is my First PR for the Transformers library. Thanks in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19084/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19084",
"html_url": "https://github.com/huggingface/transformers/pull/19084",
"diff_url": "https://github.com/huggingface/transformers/pull/19084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19084.patch",
"merged_at": 1663591333000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19083/events
|
https://github.com/huggingface/transformers/issues/19083
| 1,376,680,576
|
I_kwDOCUB6oc5SDnqA
| 19,083
|
some data is dropped when encoding by LayoutLMv3Processor
|
{
"login": "jack-gits",
"id": 30545972,
"node_id": "MDQ6VXNlcjMwNTQ1OTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30545972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jack-gits",
"html_url": "https://github.com/jack-gits",
"followers_url": "https://api.github.com/users/jack-gits/followers",
"following_url": "https://api.github.com/users/jack-gits/following{/other_user}",
"gists_url": "https://api.github.com/users/jack-gits/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jack-gits/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jack-gits/subscriptions",
"organizations_url": "https://api.github.com/users/jack-gits/orgs",
"repos_url": "https://api.github.com/users/jack-gits/repos",
"events_url": "https://api.github.com/users/jack-gits/events{/privacy}",
"received_events_url": "https://api.github.com/users/jack-gits/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,663
| 1,663
| 1,663
|
NONE
| null |
### System Info
I'm using LayoutLMv3Processor for encoding.
When I pass boxes of shape (1, 82, 4) as input, the processor expands them to (1, 512, 4), but some of the input boxes are dropped by the processor, as I can't find them in the encoding.
It seems the last n boxes are dropped.
**_tokenizer = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base')
processor = LayoutLMv3Processor(LayoutLMv3FeatureExtractor(apply_ocr=False), tokenizer)_**
**boxes before encoding: (82, 4)
boxes after encoding: (512, 4)
unique boxes before encoding: (82, 4)
unique boxes after encoding: (58, 4)**
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
import numpy as np
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3TokenizerFast, LayoutLMv3FeatureExtractor, \
LayoutLMv3ForTokenClassification, AutoModelForTokenClassification, AutoConfig
from inference_util import prepare_annotation,load_original_dataset
image_paths,bboxes,ner_tags=load_original_dataset("cro_vl_fr/","test")
tokenizer = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base')
processor = LayoutLMv3Processor(LayoutLMv3FeatureExtractor(apply_ocr=False), tokenizer)
item = image_paths[0]
image = Image.open( item).convert("RGB")
# get word-level annotations
image,words,boxes=prepare_annotation(image,bboxes[0])
boxes_2_points= np.hstack((np.array(boxes)[:,0:2],np.array(boxes)[:,4:6])).astype(int)
encoding = processor(image, words, boxes=boxes_2_points,
padding="max_length", truncation=True,
return_tensors="pt")
for k,v in encoding.items():
encoding[k] = v.squeeze()
token_boxes = encoding['bbox'].numpy()
print("boxes after encoding", np.shape(token_boxes))
print("boxes before encoding", np.shape(boxes_2_points))
token_boxes = [tuple(a) for a in token_boxes]
token_boxes = np.array(list(set(token_boxes)))
boxes_2_points = [tuple(a) for a in boxes_2_points]
boxes_2_points = np.array(list(set(boxes_2_points)))
print("unique boxes after encoding", np.shape(token_boxes))
print("unique boxes before encoding", np.shape(boxes_2_points))
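The disappearance is most likely truncation: the tokenizer expands each word-level box to one box per subword token, then truncates everything past `max_length` (512, including special tokens), so the boxes of the trailing words never make it into `encoding['bbox']`. A self-contained sketch of that mechanism (the subword counts are made up for illustration, not measured from the real tokenizer):

```python
# Simulate word-level boxes being expanded to token-level and then truncated.
num_words = 82
tokens_per_word = [8] * num_words  # assumption: each word splits into 8 subword tokens
max_length = 512

token_box_ids = []
for word_id, n_tokens in enumerate(tokens_per_word):
    token_box_ids.extend([word_id] * n_tokens)  # box index repeated per subword token

# Reserve two slots for the special tokens ([CLS]/[SEP] get dummy boxes).
truncated = token_box_ids[: max_length - 2]

surviving = len(set(truncated))
print(f"{surviving} of {num_words} word boxes survive truncation")  # 64 of 82
```

Under these assumed token counts, only the first 64 words fit into 510 token slots; the set-based comparison in the script above shows fewer unique boxes after encoding for exactly this reason.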
### Expected behavior
The original boxes should not be dropped.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19083/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19082/events
|
https://github.com/huggingface/transformers/issues/19082
| 1,376,664,109
|
I_kwDOCUB6oc5SDjot
| 19,082
|
`--with_tracking` doesn't seem to work
|
{
"login": "Ericmututu",
"id": 49343975,
"node_id": "MDQ6VXNlcjQ5MzQzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/49343975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ericmututu",
"html_url": "https://github.com/Ericmututu",
"followers_url": "https://api.github.com/users/Ericmututu/followers",
"following_url": "https://api.github.com/users/Ericmututu/following{/other_user}",
"gists_url": "https://api.github.com/users/Ericmututu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ericmututu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ericmututu/subscriptions",
"organizations_url": "https://api.github.com/users/Ericmututu/orgs",
"repos_url": "https://api.github.com/users/Ericmututu/repos",
"events_url": "https://api.github.com/users/Ericmututu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ericmututu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@muellerzr will correct me if I'm wrong, but this API is for external trackers (TensorBoard, WandB etc.). To save results on disk, just use regular python code like `json.dump`.",
"@sgugger exactly. @Ericmututu do you have any tracking libraries installed on your system?\r\n\r\n(I can probably raise an error in the scripts if it's tried and none are available)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,663
| 1,666
| 1,666
|
NONE
| null |
### System Info
Hi,
When I enable `--with_tracking` in run_glue_no_trainer.py, nothing seems to happen. After training is over, I don't find any log files in `output_dir`. How do I save the training results (e.g., accuracy, training_loss, ...) as shown in the following figure?

Thanks in advance!
### Who can help?
@sgugger @muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`Execute the script`
python run_glue_no_trainer.py \
--model_name_or_path bert-large-uncased \
--task_name mrpc \
--output_dir ./output_dir \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 100 \
--seed 42 \
--with_tracking
### Expected behavior
I want to save the results of the training process in a log file by enabling `--with_tracking`.
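Per the maintainers' replies, `--with_tracking` wires the script up to external trackers (TensorBoard, W&B, etc.); saving metrics to disk is just ordinary Python with `json.dump`. A minimal sketch (the file name and metric values are illustrative, not produced by the script):

```python
import json
import os
import tempfile

metrics = {"epoch": 100, "accuracy": 0.8725, "train_loss": 0.031}  # illustrative values

output_dir = tempfile.mkdtemp()  # stands in for --output_dir
path = os.path.join(output_dir, "all_results.json")
with open(path, "w") as f:
    json.dump(metrics, f, indent=2)

# Read it back to confirm the round trip.
with open(path) as f:
    loaded = json.load(f)
print(loaded["accuracy"])  # 0.8725
```

Dropping a call like this at the end of the training loop gives a per-run results file without any tracking library installed.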
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19081/events
|
https://github.com/huggingface/transformers/issues/19081
| 1,376,641,594
|
I_kwDOCUB6oc5SDeI6
| 19,081
|
add Unified-IO
|
{
"login": "thedarkzeno",
"id": 45200346,
"node_id": "MDQ6VXNlcjQ1MjAwMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/45200346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thedarkzeno",
"html_url": "https://github.com/thedarkzeno",
"followers_url": "https://api.github.com/users/thedarkzeno/followers",
"following_url": "https://api.github.com/users/thedarkzeno/following{/other_user}",
"gists_url": "https://api.github.com/users/thedarkzeno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thedarkzeno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thedarkzeno/subscriptions",
"organizations_url": "https://api.github.com/users/thedarkzeno/orgs",
"repos_url": "https://api.github.com/users/thedarkzeno/repos",
"events_url": "https://api.github.com/users/thedarkzeno/events{/privacy}",
"received_events_url": "https://api.github.com/users/thedarkzeno/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi, have you started working on the issue? Do you plan to integrate it yourself?",
"I'd like to work on this issue, is there any documentation on adding new models that I should follow?",
"I would like to work on this one.",
"@NielsRogge @alaradirik If no one else is currently working on adding this model, I would like to work on it.",
"Hi @kumar-devesh , I'm working on it (made some progress toward getting a working version of the Discrete VAE in Torch) but @osanseviero told me that it would be better to verify if there's interest from the development team. If they're ok with it then we could work on it together.",
"cc @sgugger @amyeroberts ",
      "Hi @ChanBong @kumar-devesh @alceballosa, Unified-IO would be a great addition to the library.\r\n\r\nIf you are not familiar with contributing to transformers, you can refer to the [guidelines](https://huggingface.co/docs/transformers/add_new_model) to get started. I'd recommend checking if you can run the original repo without any issues and get the expected results first. \r\n\r\nHere are some summarised points that might help with model addition:\r\n- Each model, including different checkpoints of the same model, has its own repo on the Hub (see [DETR-ResNet-50 repo](https://huggingface.co/facebook/detr-resnet-50) as an example). This is basically a git repo that stores the checkpoint specific configuration, preprocessing configuration and the model weights.\r\n- The code added to transformers acts as a boilerplate to initialise the model and load different checkpoints - Unified-IO trained on different datasets and/or with different resolution and/or larger / smaller architecture.\r\n- configuration_unifiedio.py should contain all the hyperparameters, the input image size and architectural details (e.g. number of hidden layers) to initialize the model.\r\n- Multi-modal models (e.g. CLIP, ALIGN) have a `Processor` class that encapsulates the `Tokenizer` and `ImageProcessor` classes, which preprocess the text and image inputs.\r\n - image_processing_unifiedio.py should contain the ImageProcessor class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.)\r\n - tokenizer_unifiedio.py should contain the Tokenizer class that preprocesses the raw input text.\r\n - processor_unifiedio.py combines the two to preprocess image-text pair inputs.\r\n- modeling_unifiedio.py should contain the model definition.\r\n- The conversion script:\r\n - Loads the pretrained original model and randomly initializes the HF implementation with the corresponding configuration\r\n - Copies the pretrained parameters (weights and biases) of the original model to the corresponding parameters of the randomly initialized HF model (the conversion step)\r\n - Forward propagates an arbitrary input (text + image in this case) through both the original model and converted HF model and checks if the outputs match\r\n - Uploads the converted HF model to the hub\r\n - Each model, tokenizer, image processor and processor class is tested with scripts under `tests/models/<MODEL_NAME>/ `, you can refer to other test files to see what tests to add.\r\n\r\nOnce you are done, you would need to run the following commands to check the PR passes all CI tests:\r\n```\r\nmake style\r\nmake quality\r\nmake repo-consistency\r\n\r\nRUN_SLOW=TRUE pytest tests/models/unifiedio/test_modeling_unifiedio.py\r\nRUN_SLOW=TRUE pytest tests/models/unifiedio/test_image_processor_unifiedio.py\r\nRUN_SLOW=TRUE pytest tests/models/unifiedio/test_tokenizer_unifiedio.py\r\nRUN_SLOW=TRUE pytest tests/models/unifiedio/test_processor_unifiedio.py\r\n```\r\n\r\nWe can do an in-depth review or create a Slack channel to address questions and issues once there is a draft PR.\r\n\r\nHope this helps!"
] | 1,663
| 1,678
| null |
CONTRIBUTOR
| null |
### Model description
I'd like to request the addition of the Unified-IO model. It is a multimodal model capable of visual question answering, image generation, and more.
The repo: https://github.com/allenai/unified-io-inference
The paper: [Unified-IO: Sequential Modeling for Generally Applicable Vision Models](https://arxiv.org/abs/2206.08916)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/allenai/unified-io-inference
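Adding a model like this typically involves a conversion script that copies the original checkpoint's parameters into the HF implementation and verifies nothing was lost. A minimal framework-free sketch of that copy-and-verify step (all parameter names and values here are hypothetical stand-ins for real state_dicts):

```python
# Sketch of a conversion script's copy-and-verify step, using plain dicts
# as stand-ins for the original and converted state_dicts (no torch needed).
original = {
    "encoder.layer0.weight": [1.0, 2.0],
    "encoder.layer0.bias": [0.5],
}
# Hypothetical mapping from original parameter names to HF-style names.
rename = {
    "encoder.layer0.weight": "unifiedio.block0.w",
    "encoder.layer0.bias": "unifiedio.block0.b",
}

converted = {rename[key]: value for key, value in original.items()}

# Verify every parameter survived the renaming unchanged.
assert len(converted) == len(original)
for old_key, new_key in rename.items():
    assert converted[new_key] == original[old_key]
print("conversion check passed")
```

A real conversion script would additionally forward-propagate the same input through both models and compare outputs before uploading to the Hub.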
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19081/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19081/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19080/events
|
https://github.com/huggingface/transformers/pull/19080
| 1,376,566,857
|
PR_kwDOCUB6oc4_IZA0
| 19,080
|
Bump oauthlib from 3.2.0 to 3.2.1 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Bumps [oauthlib](https://github.com/oauthlib/oauthlib) from 3.2.0 to 3.2.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/releases">oauthlib's releases</a>.</em></p>
<blockquote>
<h2>3.2.1</h2>
<h2>In short</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a> : Metadata endpoint support of non-HTTPS</li>
<li>CVE-2022-36087</li>
</ul>
<p>OAuth1.0:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a> : Allow IPv6 being parsed by signature</li>
</ul>
<p>General:</p>
<ul>
<li>Improved and fixed documentation warnings.</li>
<li>Cosmetic changes based on isort</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>add missing slots to TokenBase by <a href="https://github.com/ariebovenberg"><code>@ariebovenberg</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/804">oauthlib/oauthlib#804</a></li>
<li>Add CORS support for Refresh Token Grant. by <a href="https://github.com/luhn"><code>@luhn</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/806">oauthlib/oauthlib#806</a></li>
<li>GitHub Action to lint Python code by <a href="https://github.com/cclauss"><code>@cclauss</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/797">oauthlib/oauthlib#797</a></li>
<li>Docs: fix Sphinx warnings for better ReadTheDocs generation by <a href="https://github.com/JonathanHuot"><code>@JonathanHuot</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/807">oauthlib/oauthlib#807</a></li>
<li>Allow non-HTTPS issuer when OAUTHLIB_INSECURE_TRANSPORT. by <a href="https://github.com/luhn"><code>@luhn</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/803">oauthlib/oauthlib#803</a></li>
<li>chore: fix typo in test by <a href="https://github.com/tamanobi"><code>@tamanobi</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/816">oauthlib/oauthlib#816</a></li>
<li>Fix typo in server.rst by <a href="https://github.com/NemanjaT"><code>@NemanjaT</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/819">oauthlib/oauthlib#819</a></li>
<li>Fixed isort imports by <a href="https://github.com/dasm"><code>@dasm</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/820">oauthlib/oauthlib#820</a></li>
<li>docs: Fix a few typos by <a href="https://github.com/timgates42"><code>@timgates42</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/822">oauthlib/oauthlib#822</a></li>
<li>docs: fix typos by <a href="https://github.com/kianmeng"><code>@kianmeng</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/823">oauthlib/oauthlib#823</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/ariebovenberg"><code>@ariebovenberg</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/804">oauthlib/oauthlib#804</a></li>
<li><a href="https://github.com/tamanobi"><code>@tamanobi</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/816">oauthlib/oauthlib#816</a></li>
<li><a href="https://github.com/NemanjaT"><code>@NemanjaT</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/819">oauthlib/oauthlib#819</a></li>
<li><a href="https://github.com/kianmeng"><code>@kianmeng</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/823">oauthlib/oauthlib#823</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1">https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/blob/master/CHANGELOG.rst">oauthlib's changelog</a>.</em></p>
<blockquote>
<h2>3.2.1 (2022-09-09)</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a>: Metadata endpoint support of non-HTTPS</li>
<li>CVE-2022-36087</li>
</ul>
<p>OAuth1.0:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a>: Allow IPv6 being parsed by signature</li>
</ul>
<p>General:</p>
<ul>
<li>Improved and fixed documentation warnings.</li>
<li>Cosmetic changes based on isort</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/oauthlib/oauthlib/commit/88bb1562930a9bd9368bf26120655794d90d9585"><code>88bb156</code></a> Updated date and authors</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/1a45d9790543673208e603e13a7be4aa4cba7339"><code>1a45d97</code></a> Prepare 3.2.1 release</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/0adbbe10ed8ef822d1c780987fffc56670ce3f9f"><code>0adbbe1</code></a> docs: fix typos</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/6569ec3c062be7268f4a17f5a371aa29f1bcfa4a"><code>6569ec3</code></a> docs: Fix a few typos</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/bdc486e2bc3a188027a4ebec3a3013e64023ce62"><code>bdc486e</code></a> Fixed isort imports</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/7db45bda96ea6f5fde1186e8fd43d75ce6b95ab5"><code>7db45bd</code></a> Fix typo in server.rst</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b14ad85921db2406ecaf5927a8be08a7566c236e"><code>b14ad85</code></a> chore: s/bode_code_verifier/body_code_verifier/g</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b123283ba3d41acb3e787fdf68bd5907972b4bad"><code>b123283</code></a> Allow non-HTTPS issuer when OAUTHLIB_INSECURE_TRANSPORT. (<a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a>)</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/2f887b5a070bf617a471c573ad52fb58251c61af"><code>2f887b5</code></a> Docs: fix Sphinx warnings for better ReadTheDocs generation (<a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/807">#807</a>)</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/d4bafd9f1d0eba3766e933b1ac598cbbf37b8914"><code>d4bafd9</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/797">#797</a> from cclauss/patch-2</li>
<li>Additional commits viewable in <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19080/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19080",
"html_url": "https://github.com/huggingface/transformers/pull/19080",
"diff_url": "https://github.com/huggingface/transformers/pull/19080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19080.patch",
"merged_at": 1663858900000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19079/events
|
https://github.com/huggingface/transformers/issues/19079
| 1,376,540,175
|
I_kwDOCUB6oc5SDFYP
| 19,079
|
Small Typo in Docs GenerationMixin for use_cache parameter
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@ankrgyl that's a good finding 👍 A PR contribution would be deeply appreciated (for TF and FLAX as well, if the typo also exists there), but I will pick it up otherwise :)"
] | 1,663
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
### System Info
In the text_generation docs (https://huggingface.co/docs/transformers/main_classes/text_generation), `use_cache` does not show up as its own line in the list of parameters.
<img width="973" alt="Screen Shot 2022-09-16 at 6 09 30 PM" src="https://user-images.githubusercontent.com/565363/190812185-35c6eb4d-fbbf-4d17-ad86-6c1d2083c0e0.png">
I think this is a small typo due to an extra `:` in the code. Happy to fix.
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The website link
### Expected behavior
`use_cache` should be on its own line
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19079/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19078/events
|
https://github.com/huggingface/transformers/pull/19078
| 1,376,459,325
|
PR_kwDOCUB6oc4_IFUs
| 19,078
|
Add tests for legacy load by url and fix bugs
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,663
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds tests that we can load objects from a single URL to the relevant file, which is deprecated behavior until v5, but which we unintentionally broke earlier because it was untested. The added tests are marked to be removed at v5 (since they test deprecated behavior).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19078/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19078",
"html_url": "https://github.com/huggingface/transformers/pull/19078",
"diff_url": "https://github.com/huggingface/transformers/pull/19078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19078.patch",
"merged_at": 1663363202000
}
|