<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Token classification
[[open-in-colab]]
<Youtube id="wVHdVlPScxA"/>
Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.
This guide will show you how to:
1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [BROS](../model_doc/bros), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate seqeval
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load WNUT 17 dataset
Start by loading the WNUT 17 dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset
>>> wnut = load_dataset("wnut_17")
```
Then take a look at an example:
```py
>>> wnut["train"][0]
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```
Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:
```py
>>> label_list = wnut["train"].features["ner_tags"].feature.names
>>> label_list
[
"O",
"B-corporation",
"I-corporation",
"B-creative-work",
"I-creative-work",
"B-group",
"I-group",
"B-location",
"I-location",
"B-person",
"I-person",
"B-product",
"I-product",
]
```
The letter that prefixes each `ner_tag` indicates the token position of the entity:
- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is part of an entity like `Empire State Building`).
- `O` indicates the token doesn't correspond to any entity.
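You can decode this IOB scheme by hand. The sketch below maps the integer `ner_tags` of the first training example back to their label names, reusing the `label_list` and tokens shown above:

```python
# The label names and first training example, copied from the output above.
label_list = [
    "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work",
    "B-group", "I-group", "B-location", "I-location", "B-person", "I-person",
    "B-product", "I-product",
]
ner_tags = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0]
tokens = ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m",
          'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building',
          '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']

# Map integer tags to label names, then keep only the entity tokens.
labels = [label_list[tag] for tag in ner_tags]
entities = [(tok, lab) for tok, lab in zip(tokens, labels) if lab != "O"]
print(entities)
# [('Empire', 'B-location'), ('State', 'I-location'), ('Building', 'I-location'), ('ESB', 'B-location')]
```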
## Preprocess
<Youtube id="iY2AZYdZAr0"/>
The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet, and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:
```py
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```
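Counting the output above makes the effect concrete: the 27 pre-split words (and therefore 27 labels) become 34 token positions once subwords and special tokens are added. A quick check, reusing the token list printed above:

```python
# The subword tokens printed above: 27 words became 34 positions.
subword_tokens = ['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view',
                  'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks',
                  '.', 'empire', 'state', 'building', '=', 'es', '##b', '.',
                  'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
print(len(subword_tokens))  # 34 -- 7 more positions than there are word-level labels
```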
However, this adds some special tokens `[CLS]` and `[SEP]`, and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:
1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method.
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function (see [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)).
3. Only labeling the first token of a given word. Assigning `-100` to the other subtokens from the same word.
Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:
```py
>>> def tokenize_and_align_labels(examples):
... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
... labels = []
... for i, label in enumerate(examples[f"ner_tags"]):
... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word.
... previous_word_idx = None
... label_ids = []
... for word_idx in word_ids: # Set the special tokens to -100.
... if word_idx is None:
... label_ids.append(-100)
... elif word_idx != previous_word_idx: # Only label the first token of a given word.
... label_ids.append(label[word_idx])
... else:
... label_ids.append(-100)
... previous_word_idx = word_idx
... labels.append(label_ids)
... tokenized_inputs["labels"] = labels
... return tokenized_inputs
```
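The alignment rule in `tokenize_and_align_labels` can be exercised on its own, without a tokenizer. Below is a self-contained sketch of the same logic, using a hypothetical `word_ids` sequence in which the second word is split into two subwords:

```python
def align_labels(word_ids, word_labels):
    """Assign -100 to special tokens and to all but the first subtoken of each word."""
    label_ids, previous = [], None
    for word_idx in word_ids:
        if word_idx is None:            # special token ([CLS]/[SEP])
            label_ids.append(-100)
        elif word_idx != previous:      # first subtoken of a word
            label_ids.append(word_labels[word_idx])
        else:                           # subsequent subtoken of the same word
            label_ids.append(-100)
        previous = word_idx
    return label_ids

# Hypothetical batch: [CLS], "Empire", "State" split into two subwords, [SEP]
word_ids = [None, 0, 1, 1, None]
word_labels = [7, 8]  # B-location, I-location
print(align_labels(word_ids, word_labels))
# [-100, 7, 8, -100, -100]
```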
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
```
Now create a batch of examples using [`DataCollatorForTokenClassification`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForTokenClassification
>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForTokenClassification
>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf")
```
</tf>
</frameworkcontent>
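To illustrate what *dynamic padding* means, here is a minimal sketch of the idea (an illustration, not the actual collator implementation): each batch is padded only to the longest sequence it contains, and padded label positions get `-100` so the loss ignores them:

```python
def pad_batch(batch, pad_token_id=0, pad_label=-100):
    """Pad input_ids and labels to the longest sequence in this batch only."""
    max_len = max(len(seq["input_ids"]) for seq in batch)
    padded = {"input_ids": [], "labels": []}
    for seq in batch:
        pad = max_len - len(seq["input_ids"])
        padded["input_ids"].append(seq["input_ids"] + [pad_token_id] * pad)
        padded["labels"].append(seq["labels"] + [pad_label] * pad)
    return padded

# Two sequences of different lengths; only the shorter one is padded.
batch = [
    {"input_ids": [101, 7, 102], "labels": [-100, 5, -100]},
    {"input_ids": [101, 8, 9, 102], "labels": [-100, 1, 2, -100]},
]
out = pad_batch(batch)
print(out["input_ids"])  # [[101, 7, 102, 0], [101, 8, 9, 102]]
```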
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval actually produces several scores: precision, recall, F1, and accuracy.
```py
>>> import evaluate
>>> seqeval = evaluate.load("seqeval")
```
Get the NER labels first, and then create a function that passes your true predictions and true labels to [`~evaluate.EvaluationModule.compute`] to calculate the scores:
```py
>>> import numpy as np
>>> labels = [label_list[i] for i in example[f"ner_tags"]]
>>> def compute_metrics(p):
... predictions, labels = p
... predictions = np.argmax(predictions, axis=2)
... true_predictions = [
... [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
... for prediction, label in zip(predictions, labels)
... ]
... true_labels = [
... [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
... for prediction, label in zip(predictions, labels)
... ]
... results = seqeval.compute(predictions=true_predictions, references=true_labels)
... return {
... "precision": results["overall_precision"],
... "recall": results["overall_recall"],
... "f1": results["overall_f1"],
... "accuracy": results["overall_accuracy"],
... }
```
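The heart of `compute_metrics` is the filtering step that drops every position labeled `-100` before scoring. The self-contained sketch below reproduces just that step in plain Python on a toy batch (it omits the seqeval call, and the shortened `label_list` is only for illustration):

```python
label_list = ["O", "B-location", "I-location"]  # shortened for the sketch

# One sequence of per-token logits (3 positions x 3 classes) and its labels;
# the first position is a special token, marked with -100.
logits = [[[2.0, 0.1, 0.1],
           [0.1, 2.0, 0.1],
           [0.1, 0.1, 2.0]]]
labels = [[-100, 1, 2]]

def argmax(scores):
    return max(range(len(scores)), key=scores.__getitem__)

predictions = [[argmax(pos) for pos in seq] for seq in logits]

# Drop every position labeled -100 before scoring, as compute_metrics does.
true_predictions = [
    [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
    for pred, lab in zip(predictions, labels)
]
print(true_predictions)  # [['B-location', 'I-location']]
```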
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:
```py
>>> id2label = {
... 0: "O",
... 1: "B-corporation",
... 2: "I-corporation",
... 3: "B-creative-work",
... 4: "I-creative-work",
... 5: "B-group",
... 6: "I-group",
... 7: "B-location",
... 8: "I-location",
... 9: "B-person",
... 10: "I-person",
... 11: "B-product",
... 12: "I-product",
... }
>>> label2id = {
... "O": 0,
... "B-corporation": 1,
... "I-corporation": 2,
... "B-creative-work": 3,
... "I-creative-work": 4,
... "B-group": 5,
... "I-group": 6,
... "B-location": 7,
... "I-location": 8,
... "B-person": 9,
... "I-person": 10,
... "B-product": 11,
... "I-product": 12,
... }
```
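Instead of typing both dictionaries by hand, you can also derive them from the `label_list` you extracted earlier. A small sketch, reusing the label names shown above:

```python
label_list = [
    "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work",
    "B-group", "I-group", "B-location", "I-location", "B-person", "I-person",
    "B-product", "I-product",
]

# Enumerate the names to build id2label, then invert it for label2id.
id2label = dict(enumerate(label_list))
label2id = {label: i for i, label in id2label.items()}

print(id2label[7])           # B-location
print(label2id["I-person"])  # 10
```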
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load DistilBERT with [`AutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:
```py
>>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
>>> model = AutoModelForTokenClassification.from_pretrained(
... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```
At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the seqeval scores and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_wnut_model",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
... evaluation_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_wnut["train"],
... eval_dataset=tokenized_wnut["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=2e-5,
... num_train_steps=num_train_steps,
... weight_decay_rate=0.01,
... num_warmup_steps=0,
... )
```
Then you can load DistilBERT with [`TFAutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained(
... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_wnut["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_wnut["validation"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
The last two things to set up before you start training are to compute the seqeval scores from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_wnut_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
</Tip>
## Inference
Great, now that you've finetuned a model, you can use it for inference!

Grab some text you'd like to run inference on:
```py
>>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for NER with your model, and pass your text to it:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
>>> classifier(text)
[{'entity': 'B-location',
'score': 0.42658573,
'index': 2,
'word': 'golden',
'start': 4,
'end': 10},
{'entity': 'I-location',
'score': 0.35856336,
'index': 3,
'word': 'state',
'start': 11,
'end': 16},
{'entity': 'B-group',
'score': 0.3064001,
'index': 4,
'word': 'warriors',
'start': 17,
'end': 25},
{'entity': 'B-location',
'score': 0.65523505,
'index': 13,
'word': 'san',
'start': 80,
'end': 83},
{'entity': 'B-location',
'score': 0.4668663,
'index': 14,
'word': 'francisco',
'start': 84,
'end': 93}]
```
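The pipeline returns one prediction per token. If you'd rather see whole entities, you can pass `aggregation_strategy="simple"` when creating the pipeline, or merge adjacent `B-`/`I-` tokens yourself. The sketch below merges a shortened copy of the output above (it is an illustration, not the pipeline's actual grouping logic):

```python
def group_entities(token_results):
    """Merge consecutive B-/I- tokens of the same entity type into spans."""
    groups = []
    for r in token_results:
        prefix, _, etype = r["entity"].partition("-")
        # An I- token extends the previous group if the type matches and
        # the character offsets are contiguous.
        if prefix == "I" and groups and groups[-1]["type"] == etype \
                and groups[-1]["end"] == r["start"] - 1:
            groups[-1]["word"] += " " + r["word"]
            groups[-1]["end"] = r["end"]
        else:
            groups.append({"type": etype, "word": r["word"],
                           "start": r["start"], "end": r["end"]})
    return groups

# A shortened copy of the pipeline output shown above.
results = [
    {"entity": "B-location", "word": "golden", "start": 4, "end": 10},
    {"entity": "I-location", "word": "state", "start": 11, "end": 16},
    {"entity": "B-group", "word": "warriors", "start": 17, "end": 25},
]
print(group_entities(results))
```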
If you'd like, you can also manually replicate the results of the `pipeline`:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import torch
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:
```py
>>> predictions = torch.argmax(logits, dim=2)
>>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]
>>> predicted_token_class
['O',
'O',
'B-location',
'I-location',
'B-group',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'B-location',
'B-location',
'O',
'O']
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:
```py
>>> import tensorflow as tf

>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_token_class
['O',
'O',
'B-location',
'I-location',
'B-group',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'B-location',
'B-location',
'O',
'O']
```
</tf>
</frameworkcontent>
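To see which tokens the predicted classes belong to, you can zip the predictions with the tokens. A sketch using a shortened, hypothetical token list in the spirit of the output above:

```python
# Hypothetical shortened tokens with their predicted classes.
tokens = ['[CLS]', 'the', 'golden', 'state', 'warriors', '[SEP]']
predicted_token_class = ['O', 'O', 'B-location', 'I-location', 'B-group', 'O']

# Pair each token with its class, keeping only entity tokens.
tagged = [(tok, cls) for tok, cls in zip(tokens, predicted_token_class) if cls != 'O']
print(tagged)
# [('golden', 'B-location'), ('state', 'I-location'), ('warriors', 'B-group')]
```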
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Semantic segmentation
[[open-in-colab]]
<Youtube id="dKE8SIt9C-w"/>
Semantic segmentation assigns a label or class to each individual pixel of an image. There are several types of segmentation; in the case of semantic segmentation, no distinction is made between unique instances of the same object. Both objects are given the same label (for example, "car" instead of "car-1" and "car-2"). Common real-world applications of semantic segmentation include training self-driving cars to identify pedestrians and important traffic information, identifying cells and abnormalities in medical imagery, and monitoring environmental changes from satellite imagery.
This guide will show you how to:

1. Finetune [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) on the [SceneParse150](https://huggingface.co/datasets/scene_parse_150) dataset.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[BEiT](../model_doc/beit), [Data2VecVision](../model_doc/data2vec-vision), [DPT](../model_doc/dpt), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [SegFormer](../model_doc/segformer), [UPerNet](../model_doc/upernet)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install -q datasets transformers evaluate
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SceneParse150 dataset
Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```
Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```
Then take a look at an example:
```py
>>> train_ds[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>,
'scene_category': 368}
```
- `image`: a PIL image of the scene.
- `annotation`: a PIL image of the segmentation map, which is also the model's target.
- `scene_category`: a category id that describes the image scene like "kitchen" or "office". In this guide, you'll only need `image` and `annotation`, both of which are PIL images.
You'll also want to create a dictionary that maps a label id to a label class, which will be useful when you set up the model later. Download the mappings from the Hub and create the `id2label` and `label2id` dictionaries:
```py
>>> import json
>>> from huggingface_hub import cached_download, hf_hub_url
>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```
## Preprocess
The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set `reduce_labels=True` to subtract one from all the labels. The zero-index is replaced by `255`, so it's ignored by SegFormer's loss function:
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "nvidia/mit-b0"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
```
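The effect of `reduce_labels=True` is simple arithmetic: every label is shifted down by one, and the former background index `0` becomes `255`, which the loss function ignores. A sketch of that arithmetic (illustrative only, not the image processor's actual implementation):

```python
def reduce_labels(label_map):
    """Shift ADE20k-style labels down by one; background (0) becomes 255."""
    return [[255 if v == 0 else v - 1 for v in row] for row in label_map]

# A tiny hypothetical 2x3 annotation map with background (0) and classes 1, 4, 150.
annotation = [[0, 1, 1],
              [0, 4, 150]]
print(reduce_labels(annotation))  # [[255, 0, 0], [255, 3, 149]]
```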
<frameworkcontent>
<pt>
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) function from [torchvision](https://pytorch.org/vision/stable/index.html) to randomly change the color properties of an image, but you can also use any image library you like.
```py
>>> from torchvision.transforms import ColorJitter
>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
```
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into `pixel_values` and annotations to `labels`. For the training set, `jitter` is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the `images` and only crops the `labels`, because no data augmentation is applied during testing.
```py
>>> def train_transforms(example_batch):
... images = [jitter(x) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [x for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
To apply the `jitter` over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.set_transform`] function. The transform is applied on the fly, which is faster and consumes less disk space:
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting.
In this guide, you'll use [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) to randomly change the color properties of an image, but you can also use any image library you like.

Define two separate transformation functions:

- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout
```py
>>> import tensorflow as tf
>>> def aug_transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.image.random_brightness(image, 0.25)
... image = tf.image.random_contrast(image, 0.5, 2.0)
... image = tf.image.random_saturation(image, 0.75, 1.25)
... image = tf.image.random_hue(image, 0.1)
... image = tf.transpose(image, (2, 0, 1))
... return image
>>> def transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.transpose(image, (2, 0, 1))
... return image
```
Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply the image transformations and use the earlier loaded `image_processor` to convert the images into `pixel_values` and annotations to `labels`. `ImageProcessor` also takes care of resizing and normalizing the images.
```py
>>> def train_transforms(example_batch):
... images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.set_transform`] function. The transform is applied on the fly, which is faster and consumes less disk space:
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</tf>
</frameworkcontent>
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> metric = evaluate.load("mean_iou")
```
Then create a function to [`~evaluate.EvaluationModule.compute`] the metrics. Your predictions need to be converted to logits first, and then reshaped to match the size of the labels before you can call [`~evaluate.EvaluationModule.compute`]:
<frameworkcontent>
<pt>
```py
>>> import numpy as np
>>> import torch
>>> from torch import nn
>>> def compute_metrics(eval_pred):
... with torch.no_grad():
... logits, labels = eval_pred
... logits_tensor = torch.from_numpy(logits)
... logits_tensor = nn.functional.interpolate(
... logits_tensor,
... size=labels.shape[-2:],
... mode="bilinear",
... align_corners=False,
... ).argmax(dim=1)
... pred_labels = logits_tensor.detach().cpu().numpy()
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=255,
... reduce_labels=False,
... )
... for key, value in metrics.items():
... if type(value) is np.ndarray:
... metrics[key] = value.tolist()
... return metrics
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
```py
>>> def compute_metrics(eval_pred):
... logits, labels = eval_pred
... logits = tf.transpose(logits, perm=[0, 2, 3, 1])
... logits_resized = tf.image.resize(
... logits,
... size=tf.shape(labels)[1:],
... method="bilinear",
... )
... pred_labels = tf.argmax(logits_resized, axis=-1)
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=-1,
... reduce_labels=image_processor.do_reduce_labels,
... )
... per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
... per_category_iou = metrics.pop("per_category_iou").tolist()
... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
... return {"val_" + k: v for k, v in metrics.items()}
```
</tf>
</frameworkcontent>
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
</Tip>
You're ready to start training your model now! Load SegFormer with [`AutoModelForSemanticSegmentation`], and pass the model the mapping between label ids and label classes:
```py
>>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
```
At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. It is important you don't remove unused columns because this'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="segformer-b0-scene-parse-150",
... learning_rate=6e-5,
... num_train_epochs=50,
... per_device_train_batch_size=2,
... per_device_eval_batch_size=2,
... save_total_limit=3,
... evaluation_strategy="steps",
... save_strategy="steps",
... save_steps=20,
... eval_steps=20,
... logging_steps=1,
... eval_accumulation_steps=5,
... remove_unused_columns=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=train_ds,
... eval_dataset=test_ds,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!
</Tip>
To finetune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to the 🤗 Hub.
6. Use the `fit()` method to run the training.

Start by defining the hyperparameters, optimizer and learning rate schedule:
```py
>>> from transformers import create_optimizer
>>> batch_size = 2
>>> num_epochs = 50
>>> num_train_steps = len(train_ds) * num_epochs
>>> learning_rate = 6e-5
>>> weight_decay_rate = 0.01
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
次ã«ãã©ãã« ãããã³ã°ãšãšãã« [`TFAutoModelForSemanticSegmentation`] ã䜿çšã㊠SegFormer ãããŒããããªããã£ãã€ã¶ãŒãæå®ããŠã³ã³ãã€ã«ããŸããTransformers ã¢ãã«ã«ã¯ãã¹ãŠããã©ã«ãã®ã¿ã¹ã¯é¢é£ã®æå€±é¢æ°ããããããç¬èªã®æå€±é¢æ°ã䜿ãããå Žåãé€ããæå€±é¢æ°ãæå®ããå¿
èŠã¯ãªãããšã«æ³šæããŠãã ããã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
>>> model.compile(optimizer=optimizer) # No loss argument!
```
[`~datasets.Dataset.to_tf_dataset`] ãš [`DefaultDataCollator`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
>>> tf_train_dataset = train_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_eval_dataset = test_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
äºæž¬ãã粟床ãèšç®ããã¢ãã«ã ð€ ããã«ããã·ã¥ããã«ã¯ã[Keras callbacks](../main_classes/keras_callbacks) ã䜿çšããŸãã
`compute_metrics` 颿°ã [`KerasMetricCallback`] ã«æž¡ããŸãã
ãã㊠[`PushToHubCallback`] ã䜿çšããŠã¢ãã«ãã¢ããããŒãããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
>>> metric_callback = KerasMetricCallback(
... metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
... )
>>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããããã¬ãŒãã³ã° ããŒã¿ã»ãããšæ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠`fit()` ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(
... tf_train_dataset,
... validation_data=tf_eval_dataset,
... callbacks=callbacks,
... epochs=num_epochs,
... )
```
ããã§ãšãïŒã¢ãã«ã埮調æŽããð€ Hub ã§å
±æããŸãããããã§æšè«ã«äœ¿çšã§ããããã«ãªããŸããã
</tf>
</frameworkcontent>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ã®ããã«ç»åãããŒãããŸãã
```py
>>> image = ds[0]["image"]
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"/>
</div>
<frameworkcontent>
<pt>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠç»åã»ã°ã¡ã³ããŒã·ã§ã³çšã® `pipeline` ãã€ã³ã¹ã¿ã³ã¹åããããã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
>>> segmenter(image)
[{'score': None,
'label': 'wall',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>},
{'score': None,
'label': 'sky',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>},
{'score': None,
'label': 'floor',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>},
{'score': None,
'label': 'ceiling',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>},
{'score': None,
'label': 'bed ',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>},
{'score': None,
'label': 'windowpane',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>},
{'score': None,
'label': 'cabinet',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>},
{'score': None,
'label': 'chair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>},
{'score': None,
'label': 'armchair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}]
```
å¿
èŠã«å¿ããŠã`pipeline` ã®çµæãæåã§è€è£œããããšãã§ããŸããç»åããã»ããµã§ç»åãåŠçãã`pixel_values` ã GPU ã«é
眮ããŸãã
```py
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
>>> encoding = image_processor(image, return_tensors="pt")
>>> pixel_values = encoding.pixel_values.to(device)
```
å
¥åãã¢ãã«ã«æž¡ãã`logits` ãè¿ããŸãã
```py
>>> outputs = model(pixel_values=pixel_values)
>>> logits = outputs.logits.cpu()
```
次ã«ãããžãããå
ã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããŸãã
```py
>>> upsampled_logits = nn.functional.interpolate(
... logits,
... size=image.size[::-1],
... mode="bilinear",
... align_corners=False,
... )
>>> pred_seg = upsampled_logits.argmax(dim=1)[0]
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ç»åããã»ããµãããŒãããŠç»åãååŠçããå
¥åã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
>>> inputs = image_processor(image, return_tensors="tf")
```
å
¥åãã¢ãã«ã«æž¡ãã`logits` ãè¿ããŸãã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
>>> logits = model(**inputs).logits
```
次ã«ãããžãããå
ã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããã¯ã©ã¹æ¬¡å
ã« argmax ãé©çšããŸãã
```py
>>> logits = tf.transpose(logits, [0, 2, 3, 1])
>>> upsampled_logits = tf.image.resize(
... logits,
... # We reverse the shape of `image` because `image.size` returns width and height.
... image.size[::-1],
... )
>>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
```
</tf>
</frameworkcontent>
çµæãèŠèŠåããã«ã¯ã[ããŒã¿ã»ãã ã«ã©ãŒ ãã¬ãã](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) ãããããããããããã `ade_palette()` ãšããŠããŒãããŸããã¯ã©ã¹ã RGB å€ã«å€æããŸããæ¬¡ã«ãç»åãšäºæž¬ãããã»ã°ã¡ã³ããŒã·ã§ã³ ããããçµã¿åãããŠããããã§ããŸãã
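ãªãã`ade_palette()` ã¯äžèšãªã³ã¯å
ã®ã«ã©ãŒ ãã¬ãããèªã¿èŸŒãããšãæ³å®ããŠããŸãããæå
ã§ããããã ã詊ãã ãã§ããã°ã次ã®ãããªä»®ã®ãã¬ããçæ颿°ã§ã代çšã§ããŸã (å®éã® ADE20K ã®è²å®çŸ©ã§ã¯ãªããã¯ã©ã¹ããšã«æ±ºå®çãªè²ãå²ãåœãŠãã ãã®ä»®å®è£
ã§ã)ã

```python
import numpy as np


def ade_palette():
    """ã¯ã©ã¹ ID ããšã«æ±ºå®çãª RGB è²ãè¿ãã150 ã¯ã©ã¹åã®ä»®ãã¬ããã"""
    rng = np.random.default_rng(seed=0)  # ã·ãŒããåºå®ããåå®è¡ã§åãè²ã«ãªãããã«ãã
    return rng.integers(0, 256, size=(150, 3), dtype=np.uint8).tolist()


palette = np.array(ade_palette())
print(palette.shape)  # (150, 3)
```

å®éã®ã¯ã©ã¹åãšè²ã®å¯Ÿå¿ã確èªãããå Žåã¯ããªã³ã¯å
ã®ã«ã©ãŒ ãã¬ããããã®ãŸãŸäœ¿çšããŠãã ããã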
```py
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> palette = np.array(ade_palette())
>>> for label, color in enumerate(palette):
... color_seg[pred_seg == label, :] = color
>>> color_seg = color_seg[..., ::-1] # convert to BGR
>>> img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
>>> img = img.astype(np.uint8)
>>> plt.figure(figsize=(15, 10))
>>> plt.imshow(img)
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/>
</div>
<!-- File: docs/source/ja/tasks/knowledge_distillation_for_image_classification.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Knowledge Distillation for Computer Vision
[[open-in-colab]]
ç¥èã®èžçã¯ãããå€§èŠæš¡ã§è€éãªã¢ãã« (æåž«) ããããå°èŠæš¡ã§åçŽãªã¢ãã« (çåŸ) ã«ç¥èãäŒéããããã«äœ¿çšãããææ³ã§ããããã¢ãã«ããå¥ã®ã¢ãã«ã«ç¥èãèžçããã«ã¯ãç¹å®ã®ã¿ã¹ã¯ (ãã®å Žåã¯ç»ååé¡) ã§ãã¬ãŒãã³ã°ãããäºåãã¬ãŒãã³ã°æžã¿æåž«ã¢ãã«ãçšæããç»ååé¡çšã«ã©ã³ãã ã«åæåããçåŸã¢ãã«ãçšæããŸããæ¬¡ã«ãçåŸã¢ãã«ããã¬ãŒãã³ã°ããŠããã®åºåãšæåž«ã®åºåã®å·®ãæå°åããæåž«ã®æåãæš¡å£ãããŸãããã®ææ³ã¯ [Distilling the Knowledge in a Neural Network by Hinton et al](https://arxiv.org/abs/1503.02531) ã§æåã«å°å
¥ãããŸããããã®ã¬ã€ãã§ã¯ãã¿ã¹ã¯åºæã®ç¥èã®èžçãè¡ããŸããããŒã¿ã»ããã«ã¯ [Beans ããŒã¿ã»ãã](https://huggingface.co/datasets/beans) ã䜿çšããŸãã
ãã®ã¬ã€ãã§ã¯ã[埮調æŽããã ViT ã¢ãã«](https://huggingface.co/merve/vit-mobilenet-beans-224) (æåž«ã¢ãã«) ãã [MobileNet](https://huggingface.co/google/mobilenet_v2_1.4_224) (çåŸã¢ãã«) ãžã®èžçããð€ Transformers ã® [Trainer API](https://huggingface.co/docs/transformers/en/main_classes/trainer#trainer) ã䜿çšããŠè¡ããŸãã
èžçãšããã®ããã»ã¹ã®è©äŸ¡ã«å¿
èŠãªã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããŸãããã
```bash
pip install transformers datasets accelerate tensorboard evaluate --upgrade
```
ãã®äŸã§ã¯ãæåž«ã¢ãã«ãšããŠ`merve/beans-vit-224`ã¢ãã«ã䜿çšããŠããŸããããã¯ãBean ããŒã¿ã»ããã«åºã¥ããŠåŸ®èª¿æŽããã`google/vit-base-patch16-224-in21k`ã«åºã¥ãç»ååé¡ã¢ãã«ã§ãããã®ã¢ãã«ãã©ã³ãã ã«åæåããã MobileNetV2 ã«æœåºããŸãã
次ã«ãããŒã¿ã»ãããããŒãããŸãã
```python
from datasets import load_dataset
dataset = load_dataset("beans")
```
ãã®å Žåãåãè§£å床ã§åãåºåãè¿ããããããã©ã¡ãã®ã¢ãã«ã®ç»åããã»ããµã䜿çšã§ããŸãã `dataset`ã®`map()`ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã®ãã¹ãŠã®åå²ã«ååŠçãé©çšããŸãã
```python
from transformers import AutoImageProcessor
teacher_processor = AutoImageProcessor.from_pretrained("merve/beans-vit-224")
def process(examples):
    processed_inputs = teacher_processor(examples["image"])
    return processed_inputs
processed_datasets = dataset.map(process, batched=True)
```
åºæ¬çã«ãçåŸã¢ãã« (ã©ã³ãã ã«åæåããã MobileNet) ã«ãæåž«ã¢ãã« (埮調æŽããã Vision Transformer) ãæš¡å£ãããŸãããããå®çŸããããã«ããŸãæåž«ãšçåŸã®ããžããåºåãåŸãŸããæ¬¡ã«ãããããããœããã¿ãŒã²ããã®éèŠåºŠãå¶åŸ¡ãããã©ã¡ãŒã¿ `temperature` ã§å²ããŸãããŸãã`lambda` ãšåŒã°ãããã©ã¡ãŒã¿ãèžçæå€±ã®éã¿ãå¶åŸ¡ããŸãããã®äŸã§ã¯ `temperature=5`ã`lambda=0.5` ãšããŸããçåŸãšæåž«ã®ååžã®åå·®ãæž¬ãããã«ãKullback-Leibler (KL) ãã€ããŒãžã§ã³ã¹æå€±ã䜿çšããŸãã2 ã€ã®ååž P ãš Q ãäžãããããšããKL ãã€ããŒãžã§ã³ã¹ã¯ãQ ã䜿ã£ãŠ P ã衚çŸããããã«ã©ãã ãã®è¿œå æ
å ±ãå¿
èŠãã瀺ããŸãã2 ã€ãåãã§ããã°ãQ ãã P ã説æããããã«å¿
èŠãªè¿œå ã®æ
å ±ã¯ãªãã®ã§ãKL ãã€ããŒãžã§ã³ã¹ã¯ãŒãã«ãªããŸãã
```python
from transformers import TrainingArguments, Trainer
import torch
import torch.nn as nn
import torch.nn.functional as F
class ImageDistilTrainer(Trainer):
    def __init__(self, teacher_model=None, student_model=None, temperature=None, lambda_param=None, *args, **kwargs):
        super().__init__(model=student_model, *args, **kwargs)
        self.teacher = teacher_model
        self.student = student_model
        self.loss_function = nn.KLDivLoss(reduction="batchmean")
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.teacher.to(device)
        self.teacher.eval()
        self.temperature = temperature
        self.lambda_param = lambda_param

    def compute_loss(self, student, inputs, return_outputs=False):
        student_output = self.student(**inputs)

        with torch.no_grad():
            teacher_output = self.teacher(**inputs)

        # Compute soft targets for teacher and student
        soft_teacher = F.softmax(teacher_output.logits / self.temperature, dim=-1)
        soft_student = F.log_softmax(student_output.logits / self.temperature, dim=-1)

        # Compute the distillation loss
        distillation_loss = self.loss_function(soft_student, soft_teacher) * (self.temperature ** 2)

        # Compute the true label loss
        student_target_loss = student_output.loss

        # Calculate final loss
        loss = (1. - self.lambda_param) * student_target_loss + self.lambda_param * distillation_loss
        return (loss, student_output) if return_outputs else loss
```
次ã«ãHugging Face Hub ã«ãã°ã€ã³ããŠã`trainer`ãéããŠã¢ãã«ã Hugging Face Hub ã«ããã·ã¥ã§ããããã«ããŸãã
```python
from huggingface_hub import notebook_login
notebook_login()
```
次ã«ã`TrainingArguments` ãšãæåž«ã¢ãã«ããã³çåŸã¢ãã«ãèšå®ããŸãããã
```python
from transformers import AutoModelForImageClassification, MobileNetV2Config, MobileNetV2ForImageClassification
repo_name = "my-awesome-model"  # ããã·ã¥å
ã® Hub ãªããžã㪠ID (ä»»æã®ååã«å€æŽå¯)

training_args = TrainingArguments(
    output_dir=repo_name,
    num_train_epochs=30,
    fp16=True,
    logging_dir=f"{repo_name}/logs",
    logging_strategy="epoch",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    report_to="tensorboard",
    push_to_hub=True,
    hub_strategy="every_save",
    hub_model_id=repo_name,
)
num_labels = len(processed_datasets["train"].features["labels"].names)

# initialize models
teacher_model = AutoModelForImageClassification.from_pretrained(
    "merve/beans-vit-224",
    num_labels=num_labels,
    ignore_mismatched_sizes=True
)

# training MobileNetV2 from scratch
student_config = MobileNetV2Config()
student_config.num_labels = num_labels
student_model = MobileNetV2ForImageClassification(student_config)
```
`compute_metrics` 颿°ã䜿çšããŠããã¹ã ã»ããã§ã¢ãã«ãè©äŸ¡ã§ããŸãããã®é¢æ°ã¯ããã¬ãŒãã³ã° ããã»ã¹äžã«ã¢ãã«ã® `accuracy` ãèšç®ããããã«äœ¿çšãããŸãã
```python
import evaluate
import numpy as np
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    acc = accuracy.compute(references=labels, predictions=np.argmax(predictions, axis=1))
    return {"accuracy": acc["accuracy"]}
```
å®çŸ©ãããã¬ãŒãã³ã°åŒæ°ã䜿çšã㊠`Trainer` ãåæåããŸããããããŒã¿ç
§ååš (data collator) ãåæåããŸãã
```python
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
    student_model=student_model,
    teacher_model=teacher_model,
    args=training_args,
    train_dataset=processed_datasets["train"],
    eval_dataset=processed_datasets["validation"],
    data_collator=data_collator,
    tokenizer=teacher_processor,
    compute_metrics=compute_metrics,
    temperature=5,
    lambda_param=0.5
)
```
ããã§ã¢ãã«ããã¬ãŒãã³ã°ã§ããããã«ãªããŸããã
```python
trainer.train()
```
ãã¹ã ã»ããã§ã¢ãã«ãè©äŸ¡ã§ããŸãã
```python
trainer.evaluate(processed_datasets["test"])
```
ãã¹ã ã»ããã§ã¯ãã¢ãã«ã®ç²ŸåºŠã¯ 72% ã«éããŸããèžçã®å¹æã確èªããããã«ãåããã€ããŒãã©ã¡ãŒã¿ã䜿çšã㊠Beans ããŒã¿ã»ããã§ MobileNet ãã¹ã¯ã©ãããããã¬ãŒãã³ã°ãããšããããã¹ã ã»ããã§ã®ç²ŸåºŠã¯ 63% ã§ãããèªè
ã®çæ§ã«ã¯ãããŸããŸãªäºåãã¬ãŒãã³ã°æžã¿æåž«ã¢ãã«ãçåŸã¢ãŒããã¯ãã£ãèžçãã©ã¡ãŒã¿ã詊ããŠããã ãããã®çµæãå ±åããŠããã ããããå§ãããŸããèžçãããã¢ãã«ã®ãã¬ãŒãã³ã° ãã°ãšãã§ãã¯ãã€ã³ã㯠[ãã®ãªããžããª](https://huggingface.co/merve/vit-mobilenet-beans-224) ã«ãããã¹ã¯ã©ããããã¬ãŒãã³ã°ãã MobileNetV2 㯠[ãã®ãªããžããª](https://huggingface.co/merve/resnet-mobilenet-beans-5) ã«ãããŸãã
<!-- File: docs/source/ja/tasks/zero_shot_image_classification.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Zero-shot image classification
[[open-in-colab]]
ãŒãã·ã§ããç»ååé¡ã¯ãç¹å®ã®ã«ããŽãªã®ã©ãã«ä»ãäŸãå«ãããŒã¿ã§æç€ºçã«ãã¬ãŒãã³ã°ãããŠããªãã¢ãã«ã䜿çšããŠãç»åãããŸããŸãªã«ããŽãªã«åé¡ããã¿ã¹ã¯ã§ãã
åŸæ¥ãç»ååé¡ã«ã¯ãã©ãã«ä»ãç»åã®ç¹å®ã®ã»ããã§ã¢ãã«ããã¬ãŒãã³ã°ããå¿
èŠããããŸããããã®ã¢ãã«ã¯ãç¹å®ã®ç»åã®ç¹åŸŽãã©ãã«ã«ããããã³ã°ããããšãåŠç¿ããŸãããã®ãããªã¢ãã«ãæ°ããã©ãã« ã»ããã®åé¡ã¿ã¹ã¯ã«äœ¿çšããå¿
èŠãããå Žåãã¢ãã«ã "å調æŽ" ããããã®åŸ®èª¿æŽãå¿
èŠã«ãªããŸãã
察ç
§çã«ããŒãã·ã§ãããŸãã¯ãªãŒãã³èªåœç»ååé¡ã¢ãã«ã¯ãéåžžãç»åãšé¢é£ãã説ææã®å€§èŠæš¡ãªããŒã¿ã»ããã§ãã¬ãŒãã³ã°ããããã«ãã¢ãŒãã« ã¢ãã«ã§ãããããã®ã¢ãã«ã¯ãäœçœ®åãããèŠèŠ-èšèªè¡šçŸãåŠç¿ãããŒãã·ã§ããç»ååé¡ãå«ãå€ãã®äžæµã¿ã¹ã¯ã«äœ¿çšã§ããŸãã
ããã¯ãç»ååé¡ã«å¯Ÿããããæè»ãªã¢ãããŒãã§ãã远å ã®ãã¬ãŒãã³ã° ããŒã¿ãå¿
èŠãšããã«ã¢ãã«ãæ°ãããŸã èŠãããšã®ãªãã«ããŽãªã«äžè¬åã§ãããŠãŒã¶ãŒã¯ã¿ãŒã²ãã ãªããžã§ã¯ãã®èªç±åœ¢åŒã®ããã¹ã説æã§ç»åãã¯ãšãªã§ããŸãã
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ãåŠã³ãŸãã
* ãŒãã·ã§ããç»ååé¡ãã€ãã©ã€ã³ãäœæãã
* æåã§ãŒãã·ã§ããç»åå顿šè«ãå®è¡ããŸã
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q transformers
```
## Zero-shot image classification pipeline
ãŒãã·ã§ããç»ååé¡ããµããŒãããã¢ãã«ã§æšè«ã詊ãæãç°¡åãªæ¹æ³ã¯ã察å¿ãã [`pipeline`] ã䜿çšããããšã§ãã
[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads) ãããã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åããŸãã
```python
>>> from transformers import pipeline
>>> checkpoint = "openai/clip-vit-large-patch14"
>>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification")
```
次ã«ãåé¡ãããç»åãéžæããŸãã
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/>
</div>
ç»åãšåè£ãªããžã§ã¯ãã®ã©ãã«ããã€ãã©ã€ã³ã«æž¡ããŸããããã§ã¯ç»åãçŽæ¥æž¡ããŠããŸãããç»åãžã®ããŒã«ã« ãã¹ãç»å URL ãæž¡ãããšãã§ããŸãã
åè£ã©ãã«ã¯ããã®äŸã®ããã«åçŽãªåèªã«ããããšãããã説æçãªåèªã«ããããšãã§ããŸãã
```py
>>> predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
{'score': 0.000199399160919711, 'label': 'seagull'},
{'score': 7.392891711788252e-05, 'label': 'fox'},
{'score': 5.96074532950297e-05, 'label': 'bear'}]
```
## Zero-shot image classification by hand
ãŒãã·ã§ããç»ååé¡ãã€ãã©ã€ã³ã®äœ¿ç𿹿³ãçè§£ãããšããã§ããŒãã·ã§ãããå®è¡ããæ¹æ³ãèŠãŠã¿ãŸãããã
ç»åãæåã§åé¡ããŸãã
ãŸãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads) ããã¢ãã«ãšé¢é£ããã»ããµãããŒãããŸãã
ããã§ã¯ãåãšåããã§ãã¯ãã€ã³ãã䜿çšããŸãã
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
æ°åãå€ããŠãå¥ã®ç»åã䜿ã£ãŠã¿ãŸãããã
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/>
</div>
ããã»ããµã䜿çšããŠã¢ãã«ã®å
¥åãæºåããŸããããã»ããµã«ã¯ããµã€ãºå€æŽãšæ£èŠåã«ãã£ãŠã¢ãã«çšã«ç»åãæºåããç»åããã»ããµãšãããã¹ãå
¥åãåŠçããããŒã¯ãã€ã¶ãŒãå«ãŸããŠããŸãã
```py
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```
å
¥åãã¢ãã«ã«æž¡ããçµæãåŸåŠçããŸãã
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()
>>> result = [
... {"score": score, "label": candidate_label}
... for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])
... ]
>>> result
[{'score': 0.998572, 'label': 'car'},
{'score': 0.0010570387, 'label': 'bike'},
{'score': 0.0003393686, 'label': 'tree'},
{'score': 3.1572064e-05, 'label': 'cat'}]
```
<!-- File: docs/source/ja/tasks/visual_question_answering.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Visual Question Answering
[[open-in-colab]]
Visual Question Answering (VQA) ã¯ãç»åã«åºã¥ããŠèªç±åœ¢åŒã®è³ªåã«çããã¿ã¹ã¯ã§ãããã®ã¿ã¹ã¯ããµããŒãããã¢ãã«ãžã®å
¥åã¯éåžžãç»åãšè³ªåã®çµã¿åããã§ãããåºåã¯èªç¶èšèªã§è¡šçŸãããçãã§ãã
VQA ã®æ³šç®ãã¹ã䜿çšäŸã«ã¯æ¬¡ã®ãããªãã®ããããŸãã
* èŠèŠé害è
åãã®ã¢ã¯ã»ã·ããªã㣠ã¢ããªã±ãŒã·ã§ã³ã
* æè²: è¬çŸ©ãæç§æžã§ç€ºãããŠããèŠèŠçãªè³æã«ã€ããŠè³ªåãæããããããšãVQA ã¯ãã€ã³ã¿ã©ã¯ãã£ããªåç©é€šã®å±ç€ºç©ãå²è·¡ã§ãå©çšã§ããŸãã
* ã«ã¹ã¿ã㌠ãµãŒãã¹ãšé»ååååŒ: VQA ã¯ããŠãŒã¶ãŒã補åã«ã€ããŠè³ªåã§ããããã«ããããšã§ãŠãŒã¶ãŒ ãšã¯ã¹ããªãšã³ã¹ãåäžãããŸãã
* ç»åæ€çŽ¢: VQA ã¢ãã«ã䜿çšããŠãç¹å®ã®ç¹åŸŽãæã€ç»åãæ€çŽ¢ã§ããŸããããšãã°ããŠãŒã¶ãŒã¯ãç¬ã¯ããŸãã?ããšå°ããããšãã§ããŸããäžé£ã®ç»åããç¬ãåã£ãŠãããã¹ãŠã®ç»åãæ€çŽ¢ããŸãã
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ãåŠã³ãŸãã
- [`Graphcore/vqa` ããŒã¿ã»ãã](https://huggingface.co/datasets/Graphcore/vqa) äžã§åé¡ VQA ã¢ãã«ãç¹ã« [ViLT](../model_doc/vilt) ã埮調æŽããŸãã
- 埮調æŽããã ViLT ãæšè«ã«äœ¿çšããŸãã
- BLIP-2 ãªã©ã®çæã¢ãã«ã䜿çšããŠãŒãã·ã§ãã VQA æšè«ãå®è¡ããŸãã
## Fine-tuning ViLT
ViLT ã¢ãã«ã¯ãVision Transformer (ViT) ã«ããã¹ãåã蟌ã¿ãçµã¿èŸŒãã ãã®ã§ãèŠèŠãšèšèªã®äºåãã¬ãŒãã³ã° (VLP) ã®ããã®æå°éã®èšèšãå¯èœã«ããŸãããã®ã¢ãã«ã¯ããã€ãã®äžæµã¿ã¹ã¯ã«äœ¿çšã§ããŸããVQA ã¿ã¹ã¯ã®å Žåãåé¡ããã (classification head: `[CLS]` ããŒã¯ã³ã®æçµçãªé ãç¶æ
ã®äžã«çœ®ãããç·åœ¢å±€) ãã©ã³ãã ã«åæåãããŸãããããã£ãŠãèŠèŠç質åå¿çã¯ **åé¡åé¡** ãšããŠæ±ãããŸãã
BLIPãBLIP-2ãInstructBLIP ãªã©ã®æè¿ã®ã¢ãã«ã¯ãVQA ãçæã¿ã¹ã¯ãšããŠæ±ããŸãããã®ã¬ã€ãã®åŸåã§ã¯ããŒãã·ã§ãã VQA æšè«ã«ãããã䜿çšããæ¹æ³ã瀺ããŸãã
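åã«è¿°ã¹ãããã«ãViLT ã§ã¯ VQA ãåé¡åé¡ãšããŠæ±ããŸãããã®ãããªå®åŒåã§ã¯ãåã©ãã«ã®ãœãããªæåŠã¿ãŒã²ããã«å¯ŸããŠäºå€ã¯ãã¹ãšã³ããããŒæå€±ãèšç®ããã®ãäžè¬çã§ãã以äžã¯ãã®ã€ã¡ãŒãžã§ã (ViLT ã®å®éã®å®è£
ãã®ãã®ã§ã¯ãªãã仮宿°å€ã«ããã¹ã±ããã§ã)ã

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[1.2, -0.3, 0.5, -1.0]])   # ã¢ãã«åºå (仮ã®ããžãã)
targets = torch.tensor([[0.3, 1.0, 0.3, 0.0]])    # ãœãã ã¿ãŒã²ãã (仮ã®æ³šééã¿)

# åã©ãã«ãç¬ç«ã®äºå€åé¡ãšã¿ãªããŠæå€±ãèšç®ãã
loss = F.binary_cross_entropy_with_logits(logits, targets)
print(loss)
```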
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q transformers datasets
```
ã¢ãã«ãã³ãã¥ããã£ãšå
±æããããšããå§ãããŸããHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããŠãð€ Hub ã«ã¢ããããŒãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
ã¢ãã«ã®ãã§ãã¯ãã€ã³ããã°ããŒãã«å€æ°ãšããŠå®çŸ©ããŸãããã
```py
>>> model_checkpoint = "dandelin/vilt-b32-mlm"
```
## Load the data
説æã®ç®çã§ããã®ã¬ã€ãã§ã¯ã泚éä»ãã®èŠèŠç質åå¿çããŒã¿ã»ãã `Graphcore/vqa` ã®éåžžã«å°ããªãµã³ãã«ã䜿çšããŸããå®å
šãªããŒã¿ã»ãã㯠[ð€ Hub](https://huggingface.co/datasets/Graphcore/vqa) ã§èŠã€ããããšãã§ããŸãã
[`Graphcore/vqa` ããŒã¿ã»ãã](https://huggingface.co/datasets/Graphcore/vqa) ã®ä»£ããã«ãå
¬åŒ [VQA ããŒã¿ã»ãã ããŒãž](https://visualqa.org/download.html) ããåãããŒã¿ãæåã§ååŸããããšãã§ããŸããç¬èªã®ããŒã¿ã§ãã¥ãŒããªã¢ã«ãé²ãããå Žåã¯ãð€ Datasets ããã¥ã¡ã³ãã® [ç»åããŒã¿ã»ãããäœæãã](https://huggingface.co/docs/datasets/image_dataset#loading-script) ã¬ã€ãã確èªããŠãã ããã
æ€èšŒåå²ããæåã® 200 åã®äŸãããŒãããããŒã¿ã»ããã®æ©èœã調ã¹ãŠã¿ãŸãããã
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
>>> dataset
Dataset({
features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
num_rows: 200
})
```
ããŒã¿ã»ããã®ç¹åŸŽãçè§£ããããã«äŸãèŠãŠã¿ãŸãããã
```py
>>> dataset[0]
{'question': 'Where is he looking?',
'question_type': 'none of the above',
'question_id': 262148000,
'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
'answer_type': 'other',
'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
'weights': [0.30000001192092896,
1.0,
0.30000001192092896,
0.30000001192092896]}}
```
ãã®ã¿ã¹ã¯ã«é¢é£ããæ©èœã«ã¯æ¬¡ã®ãã®ããããŸãã
* `question`: ç»åã«ã€ããŠåçãã¹ã質å
* `image_id`: 質åãåç
§ããç»åãžã®ãã¹
* `label`: 泚é
æ®ãã®æ©èœã¯å¿
èŠãªãã®ã§åé€ã§ããŸãã
```py
>>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```
ã芧ã®ãšããã`label` æ©èœã«ã¯ãããŸããŸãªãã¥ãŒãã³ ã¢ãããŒã¿ãŒã«ãã£ãŠåéãããåã質åã«å¯Ÿããè€æ°ã®åç (ããã§ã¯ `id` ãšåŒã³ãŸã) ãå«ãŸããŠããŸãã質åã«å¯Ÿããçãã¯äž»èгçãªãã®ã«ãªãåŸãããã§ãããã®å Žåã質åã¯ "圌ã¯ã©ããèŠãŠããã®ãïŒ" ã§ãã"ããŠã³" ãšæ³šéãä»ãã人ããããã°ã"ããŒãã«ã§" ã "ã¹ã±ãŒãããŒã" ãšæ³šéãä»ãã人ãããŸããã
ç»åãèŠãŠãããªããªãã©ã®çããåºããèããŠã¿ãŠãã ããã
```python
>>> from PIL import Image
>>> image = Image.open(dataset[0]['image_id'])
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/>
</div>
質åãšåçãææ§ã«ãªãåŸããããã®ãããªããŒã¿ã»ããã¯ãã«ãã©ãã«åé¡åé¡ãšããŠæ±ãããŸã (è€æ°ã®åçãæå¹ã§ããå¯èœæ§ãããããã§ã)ãããã«ãåçŽãªã¯ã³ããã ãšã³ã³ãŒããããã¯ãã«ãäœæããã®ã§ã¯ãªãã泚éå
ã«åçãåºçŸããåæ°ã«åºã¥ã **ãœãã ãšã³ã³ãŒãã£ã³ã°** ãäœæããŸãã
ããšãã°ãäžã®äŸã§ã¯ã"down" ãšããåçãä»ã®åçãããé »ç¹ã«éžæãããŠãããããã¹ã³ã¢ (ããŒã¿ã»ããã§ã¯ `weight` ãšåŒã°ããŸã) 㯠1.0 ã§ãæ®ãã®åçã®ã¹ã³ã¢ã¯ 1.0 æªæºã§ãã
åŸã§é©åãªåé¡ãããã䜿çšããŠã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããããã«ã2 ã€ã®èŸæžãäœæããŸãããã
ã©ãã«åãæŽæ°ã«å€æããããŸãã¯ãã®é:
```py
>>> import itertools
>>> labels = [item['ids'] for item in dataset['label']]
>>> flattened_labels = list(itertools.chain(*labels))
>>> unique_labels = list(set(flattened_labels))
>>> label2id = {label: idx for idx, label in enumerate(unique_labels)}
>>> id2label = {idx: label for label, idx in label2id.items()}
```
ãããã³ã°ãã§ããã®ã§ãæååã®åçããã® ID ã«çœ®ãæããããã«ååŠçããã䟿å©ã«ããããã«ããŒã¿ã»ããããã©ããåããããšãã§ããŸãã
```python
>>> def replace_ids(inputs):
... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
... return inputs
>>> dataset = dataset.map(replace_ids)
>>> flat_dataset = dataset.flatten()
>>> flat_dataset.features
{'question': Value(dtype='string', id=None),
'image_id': Value(dtype='string', id=None),
'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
```
## Preprocessing data
次ã®ã¹ãããã§ã¯ãViLT ããã»ããµãããŒãããŠãã¢ãã«ã®ç»åããŒã¿ãšããã¹ã ããŒã¿ãæºåããŸãã
[`ViltProcessor`] ã¯ãBERT ããŒã¯ãã€ã¶ãŒãš ViLT ç»åããã»ããµã䟿å©ãªåäžããã»ããµã«ã©ããããŸãã
```py
>>> from transformers import ViltProcessor
>>> processor = ViltProcessor.from_pretrained(model_checkpoint)
```
ããŒã¿ãååŠçããã«ã¯ã[`ViltProcessor`] ã䜿çšããŠç»åãšè³ªåããšã³ã³ãŒãããå¿
èŠããããŸããããã»ããµã¯ [`BertTokenizerFast`] ã䜿çšããŠããã¹ããããŒã¯ã³åããããã¹ã ããŒã¿ã® `input_ids`ã`attention_mask`ãããã³ `token_type_ids` ãäœæããŸãã
ç»åã«é¢ããŠã¯ãããã»ããµã¯ [`ViltImageProcessor`] ãå©çšããŠç»åã®ãµã€ãºå€æŽãšæ£èŠåãè¡ãã`pixel_values` ãš `pixel_mask` ãäœæããŸãã
ãããã®ååŠçã¹ãããã¯ãã¹ãŠå
éšã§è¡ããã`processor` ãåŒã³åºãã ãã§æžã¿ãŸãããã ããå¯Ÿè±¡ã®ã©ãã«ã¯å¥éæºåããå¿
èŠããããŸãããã®è¡šçŸã§ã¯ãåèŠçŽ ãèããããçã (ã©ãã«) ã«å¯Ÿå¿ããŸããæ£è§£ã®åçã«å¯Ÿå¿ããèŠçŽ ã«ã¯ããããã®ã¹ã³ã¢ (éã¿) ãèšå®ãããæ®ãã®èŠçŽ ã¯ 0 ã«èšå®ãããŸãã
次ã®é¢æ°ã¯ãç»åãšè³ªåã« `processor` ãé©çšããäžã§èª¬æããããã«ã©ãã«ããã©ãŒãããããŸãã
次ã®é¢æ°ã¯ãç»åãšè³ªåã« `processor` ãé©çšããäžã§èª¬æããããã«ã©ãã«ããã©ãŒãããããŸãã
```py
>>> import torch
>>> def preprocess_data(examples):
... image_paths = examples['image_id']
... images = [Image.open(image_path) for image_path in image_paths]
... texts = examples['question']
... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")
... for k, v in encoding.items():
... encoding[k] = v.squeeze()
... targets = []
... for labels, scores in zip(examples['label.ids'], examples['label.weights']):
... target = torch.zeros(len(id2label))
... for label, score in zip(labels, scores):
... target[label] = score
... targets.append(target)
... encoding["labels"] = targets
... return encoding
```
ããŒã¿ã»ããå
šäœã«ååŠç颿°ãé©çšããã«ã¯ãð€ Datasets ã® [`~datasets.map`] 颿°ã䜿çšããŸãã`map` ãé«éåããã«ã¯ã`batched=True` ãèšå®ããŠããŒã¿ã»ããã®è€æ°ã®èŠçŽ ãäžåºŠã«åŠçããŸãããã®æç¹ã§ãäžèŠãªåã¯èªç±ã«åé€ããŠãã ããã
```py
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
num_rows: 200
})
```
æåŸã®ã¹ããããšããŠã[`DefaultDataCollator`] ã䜿çšããŠãµã³ãã«ã®ããããäœæããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
## Train the model
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã[`ViltForQuestionAnswering`] ã§ ViLT ãããŒãããã©ãã«ã®æ°ãšã©ãã« ãããã³ã°ãæå®ããŸãã
```py
>>> from transformers import ViltForQuestionAnswering
>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
```
ãã®æç¹ã§æ®ã£ãŠããã¹ããã㯠3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸãã
```py
>>> from transformers import TrainingArguments
>>> repo_id = "MariaK/vilt_finetuned_200"
>>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããã»ããµãŒãããŒã¿ç
§ååšãšãšãã« [`Trainer`] ã«æž¡ããŸãã
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=processed_dataset,
... tokenizer=processor,
... )
```
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠæçµã¢ãã«ã ð€ Hub ã«å
±æãã誰ã§ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
## Inference
ViLT ã¢ãã«ã埮調æŽããð€ Hub ã«ã¢ããããŒãããã®ã§ããããæšè«ã«äœ¿çšã§ããŸãã埮調æŽãããã¢ãã«ã§æšè«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ãã
```py
>>> from transformers import pipeline
>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```
ãã®ã¬ã€ãã®ã¢ãã«ã¯ 200 åã®äŸã§ã®ã¿ãã¬ãŒãã³ã°ãããŠãããããå€ãã¯æåŸ
ããªãã§ãã ãããã¢ãã«ãããŒã¿ããå°ãªããšãäœããåŠç¿ãããã©ãã確èªããããã«ãããŒã¿ã»ããããæåã®äŸãåãåºããŠæšè«ã詊ããŠã¿ãŸãããã
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> print(question)
>>> pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
```
ããŸãèªä¿¡ããããŸããããã¢ãã«ã¯ç¢ºãã«äœããåŠç¿ããŸãããããå€ãã®äŸãšããé·ããã¬ãŒãã³ã°ãè¡ããšãã¯ããã«è¯ãçµæãåŸãããŸãã
å¿
èŠã«å¿ããŠããã€ãã©ã€ã³ã®çµæãæåã§è€è£œããããšãã§ããŸãã
1. ç»åãšè³ªåãååŸããã¢ãã«ã®ããã»ããµã䜿çšããŠã¢ãã«çšã«æºåããŸãã
2. ã¢ãã«ãéããŠçµæãŸãã¯ååŠçã転éããŸãã
3. ããžãããããæãå¯èœæ§ã®é«ãåçã® ID ãååŸãã`id2label` ã§å®éã®åçãèŠã€ããŸãã
```py
>>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> # prepare inputs
>>> inputs = processor(image, question, return_tensors="pt")
>>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")
>>> # forward pass
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits
>>> idx = logits.argmax(-1).item()
>>> print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```
## Zero-shot VQA
以åã®ã¢ãã«ã§ã¯ VQA ãåé¡ã¿ã¹ã¯ãšããŠæ±ããŸããããBLIPãBLIP-2ãInstructBLIP ãªã©ã®äžéšã®æè¿ã®ã¢ãã«ã¯ãVQA ãçæã¿ã¹ã¯ãšããŠæ±ããŸãã[BLIP-2](../model_doc/blip-2) ãäŸãšããŠèããŠã¿ãŸãããããã®ã¢ãã«ã¯ãäºåãã¬ãŒãã³ã°æžã¿ã®ããžã§ã³ ãšã³ã³ãŒããŒãš LLM ãä»»æã«çµã¿åãããŠäœ¿çšã§ãããæ°ããèŠèŠèšèªäºåãã¬ãŒãã³ã°ã®ãã©ãã€ã ãå°å
¥ããŸãã (詳现ã«ã€ããŠã¯ã[BLIP-2 ããã°æçš¿](https://huggingface.co/blog/blip-2) ãåç
§)ãããã«ãããèŠèŠç質åå¿çãå«ãè€æ°ã®èŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®çµæãéæã§ããŸãã
ãã®ã¢ãã«ã VQA ã«äœ¿çšããæ¹æ³ã説æããŸãããããŸããã¢ãã«ãããŒãããŸããããã§ã¯ã¢ãã«ãæç€ºçã« GPU (å©çšå¯èœãªå Žå) ã«éä¿¡ããŸããããã¯ã[`Trainer`] ã䜿ã£ããã¬ãŒãã³ã°ã§ã¯èªåçã«åŠçããããããäºåã«è¡ãå¿
èŠããªãã£ãæé ã§ãã
```py
>>> from transformers import AutoProcessor, Blip2ForConditionalGeneration
>>> import torch
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
```
ã¢ãã«ã¯ç»åãšããã¹ããå
¥åãšããŠåãåããŸããVQA ããŒã¿ã»ããã®æåã®äŸãšãŸã£ããåãç»åãšè³ªåã®ãã¢ã䜿çšããŠã¿ãŸãããã
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
```
èŠèŠç質åå¿çã¿ã¹ã¯ã« BLIP-2 ã䜿çšããã«ã¯ãããã¹ã ããã³ãããç¹å®ã®åœ¢åŒ (`Question: {} Answer:`) ã«åŸãå¿
èŠããããŸãã
```py
>>> prompt = f"Question: {question} Answer:"
```
次ã«ãã¢ãã«ã®ããã»ããµã§ç»å/ããã³ãããååŠçããåŠçãããå
¥åãã¢ãã«ã«æž¡ããåºåããã³ãŒãããŸãã
```py
>>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
>>> print(generated_text)
"He is looking at the crowd"
```
ご覧のとおり、モデルは群衆と顔の向き (下を向いている) を認識しましたが、群衆がスケーターの後ろにいるという事実を見逃しています。それでも、人間が注釈を付けたデータセットを取得することが不可能な場合には、このアプローチによって有用な結果をすぐに得ることができます。
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image classification
[[open-in-colab]]
<Youtube id="tjAIM7BOYhw"/>
画像分類では、画像にラベルまたはクラスを割り当てます。テキストや音声の分類とは異なり、入力は画像を構成するピクセル値です。画像分類には、自然災害後の損傷の検出、作物の健康状態の監視、病気の兆候がないか医療画像をスクリーニングするなど、多くの用途があります。
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
1. [Food-101](https://huggingface.co/datasets/food101) データセットで [ViT](model_doc/vit) を微調整して、画像内の食品を分類します。
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ãã¥ãŒããªã¢ã«ã§èª¬æããã¿ã¹ã¯ã¯ã次ã®ã¢ãã« ã¢ãŒããã¯ãã£ã§ãµããŒããããŠããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [DINOv2](../model_doc/dinov2), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [PVT](../model_doc/pvt), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [SwiftFormer](../model_doc/swiftformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2), [VAN](../model_doc/van), [ViT](../model_doc/vit), [ViT Hybrid](../model_doc/vit_hybrid), [ViTMSN](../model_doc/vit_msn)
<!--End of the generated tip-->
</Tip>
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install transformers datasets evaluate
```
Hugging Face アカウントにログインして、モデルをアップロードしてコミュニティと共有することをお勧めします。プロンプトが表示されたら、トークンを入力してログインします。
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load Food-101 dataset
🤗 Datasets ライブラリから Food-101 データセットの小さいサブセットを読み込みます。これにより、完全なデータセットのトレーニングにさらに時間を費やす前に、実験してすべてが機能することを確認する機会が得られます。
```py
>>> from datasets import load_dataset
>>> food = load_dataset("food101", split="train[:5000]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> food = food.train_test_split(test_size=0.2)
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> food["train"][0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>,
'label': 79}
```
データセット内の各例には 2 つのフィールドがあります。
- `image`: é£åã® PIL ç»å
- `label`: é£åã®ã©ãã«ã¯ã©ã¹
モデルがラベル ID からラベル名を取得しやすくするために、ラベル名を整数にマッピングする辞書と、その逆の辞書を作成します。
```py
>>> labels = food["train"].features["label"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
... label2id[label] = str(i)
... id2label[str(i)] = label
```
ããã§ãã©ãã« ID ãã©ãã«åã«å€æã§ããããã«ãªããŸããã
```py
>>> id2label[str(79)]
'prime_rib'
```
## Preprocess
次ã®ã¹ãããã§ã¯ãViT ç»åããã»ããµãããŒãããŠç»åããã³ãœã«ã«åŠçããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "google/vit-base-patch16-224-in21k"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```
<frameworkcontent>
<pt>
いくつかの画像変換を画像に適用して、モデルの過学習に対する堅牢性を高めます。ここでは torchvision の [`transforms`](https://pytorch.org/vision/stable/transforms.html) モジュールを使用しますが、任意の画像ライブラリを使用することもできます。
ç»åã®ã©ã³ãã ãªéšåãããªãã³ã°ãããµã€ãºã倿Žããç»åã®å¹³åãšæšæºåå·®ã§æ£èŠåããŸãã
```py
>>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor
>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> size = (
... image_processor.size["shortest_edge"]
... if "shortest_edge" in image_processor.size
... else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
```
次に、変換を適用し、画像の `pixel_values` (モデルへの入力) を返す前処理関数を作成します。
```py
>>> def transforms(examples):
... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
... del examples["image"]
... return examples
```
データセット全体に前処理関数を適用するには、🤗 Datasets [`~datasets.Dataset.with_transform`] メソッドを使用します。変換は、データセットの要素を読み込むときにオンザフライで適用されます。
```py
>>> food = food.with_transform(transforms)
```
次に、[`DefaultDataCollator`] を使用してサンプルのバッチを作成します。🤗 Transformers の他のデータ照合器とは異なり、`DefaultDataCollator` はパディングなどの追加の前処理を適用しません。
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
過剰適合を回避してモデルをより堅牢にするために、データセットのトレーニング部分にデータ拡張を追加します。
ここでは、Keras 前処理レイヤーを使用して、トレーニング データの変換 (データ拡張を含む) と検証データの変換 (中央のトリミング、サイズ変更、正規化のみ) を定義します。`tf.image` または他の任意のライブラリを使用することもできます。
```py
>>> from tensorflow import keras
>>> from tensorflow.keras import layers
>>> size = (image_processor.size["height"], image_processor.size["width"])
>>> train_data_augmentation = keras.Sequential(
... [
... layers.RandomCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... layers.RandomFlip("horizontal"),
... layers.RandomRotation(factor=0.02),
... layers.RandomZoom(height_factor=0.2, width_factor=0.2),
... ],
... name="train_data_augmentation",
... )
>>> val_data_augmentation = keras.Sequential(
... [
... layers.CenterCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... ],
... name="val_data_augmentation",
... )
```
次ã«ãäžåºŠã« 1 ã€ã®ç»åã§ã¯ãªããç»åã®ãããã«é©åãªå€æãé©çšãã颿°ãäœæããŸãã
```py
>>> import numpy as np
>>> import tensorflow as tf
>>> from PIL import Image
>>> def convert_to_tf_tensor(image: Image):
... np_image = np.array(image)
... tf_image = tf.convert_to_tensor(np_image)
... # `expand_dims()` is used to add a batch dimension since
... # the TF augmentation layers operates on batched inputs.
... return tf.expand_dims(tf_image, 0)
>>> def preprocess_train(example_batch):
... """Apply train_transforms across a batch."""
... images = [
... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch
... def preprocess_val(example_batch):
... """Apply val_transforms across a batch."""
... images = [
... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch
```
ð€ ããŒã¿ã»ãã [`~datasets.Dataset.set_transform`] ã䜿çšããŠããã®å Žã§å€æãé©çšããŸãã
```py
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
```
最後の前処理ステップとして、`DefaultDataCollator` を使用してサンプルのバッチを作成します。🤗 Transformers の他のデータ照合器とは異なり、`DefaultDataCollator` はパディングなどの追加の前処理を適用しません。
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>
## Evaluate
トレーニング中にメトリクスを含めると、多くの場合、モデルのパフォーマンスを評価するのに役立ちます。🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) ライブラリを使用すると、評価方法をすばやくロードできます。このタスクでは、[accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) 指標をロードします (メトリクスをロードして計算する方法の詳細については、🤗 Evaluate [クイック ツアー](https://huggingface.co/docs/evaluate/a_quick_tour) を参照してください)。
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ããŠç²ŸåºŠãèšç®ãã颿°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
ããã§ `compute_metrics`颿°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãèšå®ãããšãã«ãã®é¢æ°ã«æ»ããŸãã
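`compute_metrics` 関数が行っている argmax → accuracy の流れを、ダミーのロジットで確認する最小限のスケッチです (ここでのロジットとラベルの値はすべて説明用の仮の値であり、🤗 Evaluate の代わりに NumPy で同じ計算を直接行っています)。

```python
import numpy as np

# 形状 (バッチサイズ, クラス数) のダミーのロジットを argmax でクラス ID に変換し、
# ラベルと一致する割合 (accuracy) を計算します。
logits = np.array(
    [
        [0.1, 2.0, 0.3],  # 予測: クラス 1
        [1.5, 0.2, 0.1],  # 予測: クラス 0
        [0.2, 0.3, 1.9],  # 予測: クラス 2
        [0.9, 0.8, 0.1],  # 予測: クラス 0
    ]
)
labels = np.array([1, 0, 2, 1])  # 最後の例だけ不正解

predictions = np.argmax(logits, axis=1)
accuracy = float((predictions == labels).mean())
print(accuracy)  # 0.75
```

`evaluate.load("accuracy")` が返すメトリクスも、内部的にはこれと同じ「予測とラベルの一致率」を計算します。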
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] を使用したモデルの微調整に慣れていない場合は、[こちら](../training#train-with-pytorch-trainer) の基本的なチュートリアルをご覧ください。
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForImageClassification`] ã䜿çšã㊠ViT ãããŒãããŸããã©ãã«ã®æ°ãšäºæ³ãããã©ãã«ã®æ°ãããã³ã©ãã« ãããã³ã°ãæå®ããŸãã
```py
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
>>> model = AutoModelForImageClassification.from_pretrained(
... checkpoint,
... num_labels=len(labels),
... id2label=id2label,
... label2id=label2id,
... )
```
ãã®æç¹ã§æ®ã£ãŠããã¹ããã㯠3 ã€ã ãã§ãã
1. [`TrainingArguments`] でトレーニング ハイパーパラメータを定義します。`image` 列が削除されるため、未使用の列を削除しないことが重要です。`image` 列がないと、`pixel_values` を作成できません。この動作を防ぐには、`remove_unused_columns=False` を設定してください。他に必要なパラメータは、モデルの保存場所を指定する `output_dir` だけです。`push_to_hub=True` を設定して、このモデルをハブにプッシュします (モデルをアップロードするには、Hugging Face にサインインする必要があります)。各エポックの終了時に、[`Trainer`] は精度を評価し、トレーニング チェックポイントを保存します。
2. トレーニング引数を、モデル、データセット、トークナイザー、データ照合器、および `compute_metrics` 関数とともに [`Trainer`] に渡します。
3. [`~Trainer.train`] を呼び出してモデルを微調整します。
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_food_model",
... remove_unused_columns=False,
... evaluation_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=food["train"],
... eval_dataset=food["test"],
... tokenizer=image_processor,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
トレーニングが完了したら、[`~transformers.Trainer.push_to_hub`] メソッドを使用してモデルをハブに共有し、誰もがモデルを使用できるようにします。
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
Keras を使用したモデルの微調整に慣れていない場合は、まず [基本チュートリアル](./training#train-a-tensorflow-model-with-keras) を確認してください。
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ãæ¬¡ã®æé ã«åŸããŸãã
1. ãã¬ãŒãã³ã°ã®ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ãããªããã£ãã€ã¶ãŒãšåŠç¿çã¹ã±ãžã¥ãŒã«ãèšå®ããŸãã
2. äºåãã¬ãŒãã³ã°ãããã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããŸãã
3. ð€ ããŒã¿ã»ããã `tf.data.Dataset` ã«å€æããŸãã
4. ã¢ãã«ãã³ã³ãã€ã«ããŸãã
5. ã³ãŒã«ããã¯ã远å ãã`fit()` ã¡ãœããã䜿çšããŠãã¬ãŒãã³ã°ãå®è¡ããŸãã
6. モデルを 🤗 Hub にアップロードしてコミュニティと共有します。
ãŸãããã€ããŒãã©ã¡ãŒã¿ãŒããªããã£ãã€ã¶ãŒãåŠç¿çã¹ã±ãžã¥ãŒã«ãå®çŸ©ããŸãã
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
次ã«ãã©ãã« ãããã³ã°ãšãšãã« [`TFAutoModelForImageClassification`] ã䜿çšã㊠ViT ãèªã¿èŸŒã¿ãŸãã
```py
>>> from transformers import TFAutoModelForImageClassification
>>> model = TFAutoModelForImageClassification.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
```
[`~datasets.Dataset.to_tf_dataset`] と `data_collator` を使用して、データセットを `tf.data.Dataset` 形式に変換します。
```py
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```
`compile()` ã䜿çšããŠãã¬ãŒãã³ã°çšã«ã¢ãã«ãèšå®ããŸãã
```py
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy
>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```
äºæž¬ãã粟床ãèšç®ããã¢ãã«ã ð€ ããã«ããã·ã¥ããã«ã¯ã[Keras callbacks](../main_classes/keras_callbacks) ã䜿çšããŸãã
`compute_metrics` 颿°ã [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback) ã«æž¡ããŸãã
[PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) ã䜿çšããŠã¢ãã«ãã¢ããããŒãããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="food_classifier",
... tokenizer=image_processor,
... save_strategy="no",
... )
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ついに、モデルをトレーニングする準備が整いました。トレーニングおよび検証データセット、エポック数、モデルを微調整するためのコールバックを指定して `fit()` を呼び出します。
```py
>>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```
おめでとうございます!モデルを微調整し、🤗 Hub で共有しました。これで推論に使用できるようになりました。
</tf>
</frameworkcontent>
<Tip>
画像分類用のモデルを微調整する方法の詳細な例については、対応する [PyTorch ノートブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) を参照してください。
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ãå®è¡ãããç»åãèªã¿èŸŒã¿ãŸãã
```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> image = ds["image"][0]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/>
</div>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠç»ååé¡çšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> classifier = pipeline("image-classification", model="my_awesome_food_model")
>>> classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
{'score': 0.015232225880026817, 'label': 'bruschetta'},
{'score': 0.01519392803311348, 'label': 'chicken_wings'},
{'score': 0.013022331520915031, 'label': 'pork_chop'},
{'score': 0.012728818692266941, 'label': 'prime_rib'}]
```
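`pipeline` が返すのはスコア付きラベルの辞書のリストなので、最上位の予測だけが必要な場合は次のように取り出せます。これは上の実行例の戻り値を仮の入力として使った最小限のスケッチです。

```python
# `pipeline` の出力 (スコアの降順に並んだラベルのリスト) から
# 最もスコアの高い予測を取り出すスケッチです。値は上の実行例からの抜粋です。
results = [
    {"score": 0.31856709718704224, "label": "beignets"},
    {"score": 0.015232225880026817, "label": "bruschetta"},
    {"score": 0.01519392803311348, "label": "chicken_wings"},
]

best = max(results, key=lambda r: r["score"])
print(best["label"])  # beignets
```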
必要に応じて、`pipeline` の結果を手動で複製することもできます。
<frameworkcontent>
<pt>
画像プロセッサをロードして画像を前処理し、`inputs` を PyTorch テンソルとして返します。
```py
>>> from transformers import AutoImageProcessor
>>> import torch
>>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
>>> inputs = image_processor(image, return_tensors="pt")
```
入力をモデルに渡し、ロジットを返します。
```py
>>> from transformers import AutoModelForImageClassification
>>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ãããã©ãã«ãååŸããã¢ãã«ã® `id2label` ãããã³ã°ã䜿çšããŠã©ãã«ã«å€æããŸãã
```py
>>> predicted_label = logits.argmax(-1).item()
>>> model.config.id2label[predicted_label]
'beignets'
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
画像プロセッサをロードして画像を前処理し、`inputs` を TensorFlow テンソルとして返します。
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
>>> inputs = image_processor(image, return_tensors="tf")
```
入力をモデルに渡し、ロジットを返します。
```py
>>> from transformers import TFAutoModelForImageClassification
>>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
>>> logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ãããã©ãã«ãååŸããã¢ãã«ã® `id2label` ãããã³ã°ã䜿çšããŠã©ãã«ã«å€æããŸãã
```py
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'beignets'
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Monocular depth estimation
単眼深度推定は、シーンの深度情報を単一の画像から予測することを含むコンピューター ビジョン タスクです。言い換えれば、単一カメラの視点から、シーン内のオブジェクトまでの距離を推定するプロセスです。
単眼深度推定には、3D 再構築、拡張現実、自動運転、ロボット工学など、さまざまな用途があります。モデルは、シーン内のオブジェクト間の複雑な関係と、それに対応する深度情報を理解する必要があるため、これは困難なタスクです。深度情報は、照明条件、オクルージョン、テクスチャなどの要因の影響を受ける可能性があります。
<Tip>
ãã®ãã¥ãŒããªã¢ã«ã§èª¬æããã¿ã¹ã¯ã¯ã次ã®ã¢ãã« ã¢ãŒããã¯ãã£ã§ãµããŒããããŠããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[DPT](../model_doc/dpt), [GLPN](../model_doc/glpn)
<!--End of the generated tip-->
</Tip>
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ãåŠã³ãŸãã
* 深床æšå®ãã€ãã©ã€ã³ãäœæãã
* æåã§æ·±åºŠæšå®æšè«ãå®è¡ããŸã
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install -q transformers
```
## Depth estimation pipeline
深床æšå®ããµããŒãããã¢ãã«ã§æšè«ã詊ãæãç°¡åãªæ¹æ³ã¯ã察å¿ãã [`pipeline`] ã䜿çšããããšã§ãã
[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=Depth-estimation&sort=downloads) ãããã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åããŸãã
```py
>>> from transformers import pipeline
>>> checkpoint = "vinvino02/glpn-nyu"
>>> depth_estimator = pipeline("depth-estimation", model=checkpoint)
```
次ã«ãåæããç»åãéžæããŸãã
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/>
</div>
ç»åããã€ãã©ã€ã³ã«æž¡ããŸãã
```py
>>> predictions = depth_estimator(image)
```
パイプラインは 2 つのエントリを含む辞書を返します。最初のものは `predicted_depth` と呼ばれ、各ピクセルの深度をメートル単位で表す値を持つテンソルです。
2 çªç®ã®`depth`ã¯ã深床æšå®çµæãèŠèŠåãã PIL ç»åã§ãã
èŠèŠåãããçµæãèŠãŠã¿ãŸãããã
```py
>>> predictions["depth"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/>
</div>
## Depth estimation inference by hand
深床æšå®ãã€ãã©ã€ã³ã®äœ¿ç𿹿³ãçè§£ããã®ã§ãåãçµæãæåã§è€è£œããæ¹æ³ãèŠãŠã¿ãŸãããã
ãŸãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=Depth-estimation&sort=downloads) ããã¢ãã«ãšé¢é£ããã»ããµãããŒãããŸãã
ããã§ã¯ãåãšåããã§ãã¯ãã€ã³ãã䜿çšããŸãã
```py
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> checkpoint = "vinvino02/glpn-nyu"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
```
サイズ変更や正規化など、必要な画像変換を処理する `image_processor` を使用して、モデルの画像入力を準備します。
```py
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
```
準備された入力をモデルに渡します。
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(pixel_values)
... predicted_depth = outputs.predicted_depth
```
çµæãèŠèŠåããŸãã
```py
>>> import numpy as np
>>> # interpolate to original size
>>> prediction = torch.nn.functional.interpolate(
... predicted_depth.unsqueeze(1),
... size=image.size[::-1],
... mode="bicubic",
... align_corners=False,
... ).squeeze()
>>> output = prediction.numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
>>> depth
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/>
</div>
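上のコードの `output * 255 / np.max(output)` は、深度マップを 8 ビットのグレースケールに線形変換します。つまり、最も深度の大きいピクセルが 255 に、深度 0 が 0 に対応します。ダミーの深度マップ (値は説明用の仮のもの) でこの変換を確認する最小限のスケッチです。

```python
import numpy as np

# ダミーの深度マップ (メートル単位) を 8 ビットのグレースケールに変換するスケッチです。
output = np.array([[0.0, 1.0], [2.0, 4.0]])

# 最大深度 (ここでは 4.0 m) が 255 に、深度 0 が 0 にマッピングされます。
formatted = (output * 255 / np.max(output)).astype("uint8")
print(formatted)
# [[  0  63]
#  [127 255]]
```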
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Multiple choice
[[open-in-colab]]
å€è¢éžæã¿ã¹ã¯ã¯è³ªåå¿çã«äŒŒãŠããŸãããããã€ãã®åè£ã®åçãã³ã³ããã¹ããšãšãã«æäŸãããæ£ããåçãéžæããããã«ã¢ãã«ããã¬ãŒãã³ã°ãããç¹ãç°ãªããŸãã
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
1. [SWAG](https://huggingface.co/datasets/swag) データセットの「通常」構成で [BERT](https://huggingface.co/bert-base-uncased) を微調整して、複数の選択肢と何らかのコンテキストを考慮して最適な回答を選択できるようにします。
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ãã¥ãŒããªã¢ã«ã§èª¬æããã¿ã¹ã¯ã¯ã次ã®ã¢ãã« ã¢ãŒããã¯ãã£ã§ãµããŒããããŠããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install transformers datasets evaluate
```
モデルをアップロードしてコミュニティと共有できるように、Hugging Face アカウントにログインすることをお勧めします。プロンプトが表示されたら、トークンを入力してログインします。
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SWAG dataset
ãŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªãã SWAG ããŒã¿ã»ããã®ãéåžžãæ§æãããŒãããŸãã
```py
>>> from datasets import load_dataset
>>> swag = load_dataset("swag", "regular")
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
```
ããã«ã¯ããããã®ãã£ãŒã«ããããããã«èŠããŸãããå®éã¯éåžžã«ç°¡åã§ãã
- `sent1` ãš `sent2`: ãããã®ãã£ãŒã«ãã¯æã®å§ãŸãã瀺ãããã® 2 ã€ãçµã¿åããããš `startphrase` ãã£ãŒã«ããåŸãããŸãã
- `ending`: æã®çµããæ¹ãšããŠèããããçµããæ¹ã瀺åããŸãããæ£ããã®ã¯ 1 ã€ã ãã§ãã
- `label`: æ£ããæã®çµãããèå¥ããŸãã
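上の各フィールドの関係は、文頭 (`sent2`) と 4 つの `ending` を組み合わせて候補文を作り、`label` が正解の候補を指す、というものです。上に表示した最初の例を仮の入力として使い、この組み立てを確認する最小限のスケッチです。

```python
# データセットの最初の例 (上に表示したもの) から、4 つの候補文を組み立てるスケッチです。
example = {
    "sent2": "A drum line",
    "ending0": "passes by walking down the street playing their instruments.",
    "ending1": "has heard approaching them.",
    "ending2": "arrives and they're outside dancing and asleep.",
    "ending3": "turns the lead singer watches the performance.",
    "label": 0,
}

# 文頭 + 各候補の結末を連結して 4 つの候補文を作り、`label` で正解を取り出します。
candidates = [example["sent2"] + " " + example["ending" + str(i)] for i in range(4)]
correct = candidates[example["label"]]
print(correct)
# A drum line passes by walking down the street playing their instruments.
```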
## Preprocess
次ã®ã¹ãããã§ã¯ãBERT ããŒã¯ãã€ã¶ãŒãããŒãããŠãæã®å§ãŸããš 4 ã€ã®å¯èœãªçµãããåŠçããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```
作成する前処理関数は次のことを行う必要があります。
1. `sent1` ãã£ãŒã«ãã®ã³ããŒã 4 ã€äœæããããããã `sent2` ãšçµã¿åãããŠæã®å§ãŸããåçŸããŸãã
2. `sent2` ã 4 ã€ã®å¯èœãªææ«å°Ÿã®ãããããšçµã¿åãããŸãã
3. ããã 2 ã€ã®ãªã¹ããããŒã¯ã³åã§ããããã«ãã©ããåãããã®åŸãåäŸã«å¯Ÿå¿ãã `input_ids`ã`attention_mask`ãããã³ `labels` ãã£ãŒã«ããå«ãŸããããã«éãã©ããåããŸãã
```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]
>>> def preprocess_function(examples):
... first_sentences = [[context] * 4 for context in examples["sent1"]]
... question_headers = examples["sent2"]
... second_sentences = [
... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
... ]
... first_sentences = sum(first_sentences, [])
... second_sentences = sum(second_sentences, [])
... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```
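`preprocess_function` の最後の行で行っている「4 件ずつの再グループ化」は、平坦化してトークン化した結果を元の例ごとにまとめ直す処理です。ダミー値 (トークナイザーの出力の代わりに説明用の仮のリスト) を使って同じロジックを確認する最小限のスケッチです。

```python
# 2 つの例 x 4 候補 = 8 件ぶんの平坦化済みダミー input_ids を、
# 4 件ずつ元の例にグループ化し直すスケッチです。
num_choices = 4
flat = {"input_ids": [[i] for i in range(8)]}

grouped = {
    k: [v[i : i + num_choices] for i in range(0, len(v), num_choices)]
    for k, v in flat.items()
}
print(grouped["input_ids"])
# [[[0], [1], [2], [3]], [[4], [5], [6], [7]]]
```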
データセット全体に前処理関数を適用するには、🤗 Datasets [`~datasets.Dataset.map`] メソッドを使用します。`batched=True` を設定してデータセットの複数の要素を一度に処理することで、`map` 関数を高速化できます。
```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```
🤗 Transformers には多肢選択用のデータ照合器がないため、[`DataCollatorWithPadding`] を調整してサンプルのバッチを作成する必要があります。データセット全体を最大長までパディングするよりも、照合中にバッチ内の最長の長さまで文を*動的にパディング*する方が効率的です。
`DataCollatorForMultipleChoice` は、すべてのモデル入力を平坦化し、パディングを適用してから、結果を非平坦化します。
<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="pt",
... )
... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
... batch["labels"] = torch.tensor(labels, dtype=torch.int64)
... return batch
```
</pt>
<tf>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="tf",
... )
... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
... return batch
```
</tf>
</frameworkcontent>
## Evaluate
トレーニング中にメトリクスを含めると、多くの場合、モデルのパフォーマンスを評価するのに役立ちます。🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) ライブラリを使用して、評価メソッドをすばやくロードできます。このタスクでは、[accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) メトリクスをロードします (メトリクスの読み込みと計算方法の詳細については、🤗 Evaluate [クイック ツアー](https://huggingface.co/docs/evaluate/a_quick_tour) を参照してください)。
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
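As a quick sanity check, the argmax-then-compare logic of that function can be exercised on dummy logits. In this sketch the accuracy is computed by hand so it runs offline (the numbers are fabricated):

```python
import numpy as np

def compute_metrics(eval_pred):
    # Same logic as above, with accuracy computed locally instead of via 🤗 Evaluate
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {"accuracy": float((predictions == labels).mean())}

# Dummy logits for 4 examples with 4 answer choices each
logits = np.array(
    [
        [2.0, 0.1, 0.1, 0.1],  # predicts choice 0
        [0.1, 2.0, 0.1, 0.1],  # predicts choice 1
        [0.1, 0.1, 2.0, 0.1],  # predicts choice 2
        [2.0, 0.1, 0.1, 0.1],  # predicts choice 0
    ]
)
labels = np.array([0, 1, 2, 3])  # the last example is predicted incorrectly

print(compute_metrics((logits, labels)))  # → {'accuracy': 0.75}
```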
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load BERT with [`AutoModelForMultipleChoice`]:
```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```
At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
... evaluation_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_swag["train"],
... eval_dataset=tokenized_swag["validation"],
... tokenizer=tokenizer,
... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```
Then you can load BERT with [`TFAutoModelForMultipleChoice`]:
```py
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_swag["train"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_swag["validation"],
... shuffle=False,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that all Transformers models have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> model.compile(optimizer=optimizer) # No loss argument!
```
The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
</Tip>
## Inference
Great, now that you've finetuned a model, you can use it for inference!

Come up with some text and two candidate answers:
```py
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```
<frameworkcontent>
<pt>
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:
```py
>>> import torch
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```
Pass your inputs and labels to the model and return the `logits`:
```py
>>> from transformers import AutoModelForMultipleChoice
>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```
Get the class with the highest probability:
```py
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```
</pt>
<tf>
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
Get the class with the highest probability:
```py
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Question answering
[[open-in-colab]]
<Youtube id="ajPx5LwJD-I"/>
Question answering tasks return an answer given a question. If you've ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you've used a question answering model before. There are two common types of question answering tasks:

- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.
This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), 
[XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SQuAD dataset
Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset
>>> squad = load_dataset("squad", split="train[:5000]")
```
Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> squad = squad.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
'id': '5733be284776f41900661182',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'title': 'University_of_Notre_Dame'
}
```
There are several important fields here:

- `answers`: the starting location of the answer token and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question the model should answer.
## Preprocess
<Youtube id="qgaM0weJHpA"/>
The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
There are a few preprocessing steps particular to question answering tasks you should be aware of:

1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting `return_offsets_mapping=True`.
3. With the mapping in hand, you can find the start and end tokens of the answer. Use the [`~tokenizers.Encoding.sequence_ids`] method to find which part of the offset corresponds to the `question` and which corresponds to the `context`.

Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:
```py
>>> def preprocess_function(examples):
... questions = [q.strip() for q in examples["question"]]
... inputs = tokenizer(
... questions,
... examples["context"],
... max_length=384,
... truncation="only_second",
... return_offsets_mapping=True,
... padding="max_length",
... )
... offset_mapping = inputs.pop("offset_mapping")
... answers = examples["answers"]
... start_positions = []
... end_positions = []
... for i, offset in enumerate(offset_mapping):
... answer = answers[i]
... start_char = answer["answer_start"][0]
... end_char = answer["answer_start"][0] + len(answer["text"][0])
... sequence_ids = inputs.sequence_ids(i)
... # Find the start and end of the context
... idx = 0
... while sequence_ids[idx] != 1:
... idx += 1
... context_start = idx
... while sequence_ids[idx] == 1:
... idx += 1
... context_end = idx - 1
... # If the answer is not fully inside the context, label it (0, 0)
... if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
... start_positions.append(0)
... end_positions.append(0)
... else:
... # Otherwise it's the start and end token positions
... idx = context_start
... while idx <= context_end and offset[idx][0] <= start_char:
... idx += 1
... start_positions.append(idx - 1)
... idx = context_end
... while idx >= context_start and offset[idx][1] >= end_char:
... idx -= 1
... end_positions.append(idx + 1)
... inputs["start_positions"] = start_positions
... inputs["end_positions"] = end_positions
... return inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need:
```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding.
<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]:
```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_squad["train"],
... eval_dataset=tokenized_squad["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
... init_lr=2e-5,
... num_warmup_steps=0,
... num_train_steps=total_train_steps,
... )
```
Then you can load DistilBERT with [`TFAutoModelForQuestionAnswering`]:
```py
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_squad["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_squad["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer)
```
The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_qa_model",
... tokenizer=tokenizer,
... )
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
</Tip>
## Evaluate
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training, so you're not completely in the dark about your model's performance.

If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) chapter from the 🤗 Hugging Face Course!
## Inference
Great, now that you've finetuned a model, you can use it for inference!

Come up with a question and some context you'd like the model to predict:
```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for question answering with your model, and pass your text to it:
```py
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
'start': 10,
'end': 95,
'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering
>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
... outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Masked language modeling
[[open-in-colab]]
<Youtube id="mqElG5QJWUg"/>
Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This means the model has full access to the tokens on the left and the right. Masked language modeling is great for tasks that require a good contextual understanding of an entire sequence. BERT is an example of a masked language model.

This guide will show you how to:
1. Finetune [DistilRoBERTa](https://huggingface.co/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.
<Tip>
You can finetune other architectures for masked language modeling following the same steps in this guide. Choose one of the following architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load ELI5 dataset
Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset
>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```
Split the dataset's `train_asks` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
'score': [6, 3],
'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
'answers_urls': {'url': []},
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls': {'url': []}}
```
While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is that you don't need labels (also known as an unsupervised task) because the next word *is* the label.
## Preprocess
<Youtube id="8PmhEIXhBvI"/>
For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```
You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
'answers.score': [6, 3],
'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
'answers_urls.url': [],
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls.url': []}
```
Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
```py
>>> def preprocess_function(examples):
... return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and by increasing the number of processes with `num_proc`. Remove any columns you don't need:
```py
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )
```
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

You can now use a second preprocessing function to:

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.
```py
>>> block_size = 128
>>> def group_texts(examples):
... # Concatenate all texts.
... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
... total_length = len(concatenated_examples[list(examples.keys())[0]])
... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
... # customize this part to your needs.
... if total_length >= block_size:
... total_length = (total_length // block_size) * block_size
... # Split by chunks of block_size.
... result = {
... k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
... for k, t in concatenated_examples.items()
... }
... return result
```
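`group_texts` の分割ロジックは、🤗 Datasets に依存しない純粋な Python でも確認できます (以下は小さな `block_size` と仮のトークン ID を使ったスケッチです)。

```python
block_size = 4  # 動作確認用の小さな値 (本文では 128)

def group_texts(examples):
    # すべてのリストを連結し、block_size の倍数に切り詰めてから分割する
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

result = group_texts({"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]})
print(result["input_ids"])  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

末尾の余り (この例では `[9, 10]`) は、本文のコードと同じく切り捨てられます。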
データセット全体に`group_texts`関数を適用します。
```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
次に、[`DataCollatorForLanguageModeling`] を使用してサンプルのバッチを作成します。データセット全体を最大長までパディングするのではなく、照合中にバッチ内の最長の長さまで文を *動的にパディング* する方が効率的です。
<frameworkcontent>
<pt>
ã·ãŒã±ã³ã¹çµäºããŒã¯ã³ãããã£ã³ã° ããŒã¯ã³ãšããŠäœ¿çšããããŒã¿ãå埩ãããã³ã«ã©ã³ãã ã«ããŒã¯ã³ããã¹ã¯ããããã« `mlm_probability` ãæå®ããŸãã
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
</pt>
<tf>
ã·ãŒã±ã³ã¹çµäºããŒã¯ã³ãããã£ã³ã° ããŒã¯ã³ãšããŠäœ¿çšããããŒã¿ãå埩ãããã³ã«ã©ã³ãã ã«ããŒã¯ã³ããã¹ã¯ããããã« `mlm_probability` ãæå®ããŸãã
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] を使用したモデルの微調整に慣れていない場合は、[ここ](../training#train-with-pytorch-trainer) の基本的なチュートリアルをご覧ください!
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForMaskedLM`] ã䜿çšã㊠DistilRoBERTa ãããŒãããŸãã
```py
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] でトレーニング ハイパーパラメータを定義します。唯一の必須パラメータは、モデルの保存場所を指定する `output_dir` です。`push_to_hub=True`を設定して、このモデルをハブにプッシュします (モデルをアップロードするには、Hugging Face にサインインする必要があります)。
2. トレーニング引数を、モデル、データセット、データ照合器とともに [`Trainer`] に渡します。
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_mlm_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=lm_dataset["train"],
... eval_dataset=lm_dataset["test"],
... data_collator=data_collator,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.evaluate`] ã¡ãœããã䜿çšããŠã¢ãã«ãè©äŸ¡ãããã®è€éããååŸããŸãã
```py
>>> import math
>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```
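パープレキシティは交差エントロピー損失の指数にすぎないため、`eval_loss` から直接計算できます (以下は仮の損失値を使ったスケッチです)。

```python
import math

eval_loss = 2.17  # 仮の評価損失
perplexity = math.exp(eval_loss)
print(f"Perplexity: {perplexity:.2f}")
```

損失が小さいほどパープレキシティも小さくなり、モデルがテキストをより「驚かずに」予測できていることを意味します。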
次に、[`~transformers.Trainer.push_to_hub`] メソッドを使用してモデルをハブに共有し、誰もがモデルを使用できるようにします。
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
Keras を使用したモデルの微調整に慣れていない場合は、[こちら](../training#train-a-tensorflow-model-with-keras) の基本的なチュートリアルをご覧ください!
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ããªããã£ãã€ã¶ãŒé¢æ°ãåŠç¿çã¹ã±ãžã¥ãŒã«ãããã³ããã€ãã®ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãŒãã»ããã¢ããããããšããå§ããŸãã
```py
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
次ã«ã[`TFAutoModelForMaskedLM`] ã䜿çšã㊠DistilRoBERTa ãããŒãã§ããŸãã
```py
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
```
[`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> tf_train_set = model.prepare_tf_dataset(
... lm_dataset["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... lm_dataset["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) を使用してトレーニング用のモデルを設定します。Transformers モデルにはすべてデフォルトのタスク関連の損失関数があるため、次の場合を除き、損失関数を指定する必要はないことに注意してください。
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_eli5_mlm_model",
... tokenizer=tokenizer,
... )
```
ã€ãã«ãã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠[`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```
ãã¬ãŒãã³ã°ãå®äºãããšãã¢ãã«ã¯èªåçã«ããã«ã¢ããããŒãããã誰ã§ã䜿çšã§ããããã«ãªããŸãã
</tf>
</frameworkcontent>
<Tip>
マスクされた言語モデリング用にモデルを微調整する方法のより詳細な例については、対応するドキュメントを参照してください。
[PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
ãŸã㯠[TensorFlow ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)ã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
ã¢ãã«ã«ç©ºçœãåããããã¹ããèãåºããç¹å¥ãª `<mask>` ããŒã¯ã³ã䜿çšããŠç©ºçœã瀺ããŸãã
```py
>>> text = "The Milky Way is a <mask> galaxy."
```
推論用に微調整されたモデルを試す最も簡単な方法は、それを [`pipeline`] で使用することです。モデルを使用してフィルマスクの`pipeline`をインスタンス化し、テキストをそれに渡します。必要に応じて、`top_k`パラメータを使用して、返す予測の数を指定できます。
```py
>>> from transformers import pipeline
>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
'token': 21300,
'token_str': ' spiral',
'sequence': 'The Milky Way is a spiral galaxy.'},
{'score': 0.07087188959121704,
'token': 2232,
'token_str': ' massive',
'sequence': 'The Milky Way is a massive galaxy.'},
{'score': 0.06434620916843414,
'token': 650,
'token_str': ' small',
'sequence': 'The Milky Way is a small galaxy.'}]
```
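上記の `score` は、マスク位置のロジットにソフトマックスを適用して得られる確率です。その関係は、小さな数値例で確認できます (仮のロジット値を使ったスケッチです)。

```python
import math

def softmax(logits):
    # オーバーフロー対策として最大値を引いてから指数をとる
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # 仮のロジット
print(probs)  # 合計は 1 になり、大きいロジットほど高い確率になる
```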
<frameworkcontent>
<pt>
テキストをトークン化し、`input_ids`を PyTorch テンソルとして返します。`<mask>` トークンの位置も指定する必要があります。
```py
>>> import torch
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```
入力をモデルに渡し、マスクされたトークンの`logits`を返します。
```py
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```
次ã«ããã¹ã¯ããã 3 ã€ã®ããŒã¯ã³ãæãé«ã確çã§è¿ããåºåããŸãã
```py
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</pt>
<tf>
テキストをトークン化し、`input_ids`を TensorFlow テンソルとして返します。`<mask>` トークンの位置も指定する必要があります。
```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```
入力をモデルに渡し、マスクされたトークンの`logits`を返します。
```py
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```
次ã«ããã¹ã¯ããã 3 ã€ã®ããŒã¯ã³ãæãé«ã確çã§è¿ããåºåããŸãã
```py
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image-to-Image Task Guide
[[open-in-colab]]
Image-to-Image タスクは、アプリケーションが画像を受け取り、別の画像を出力するタスクです。これには、画像強化 (超解像度、低光量強化、ディレインなど)、画像修復などを含むさまざまなサブタスクがあります。
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
- 超解像度タスクに画像間のパイプラインを使用します。
- ãã€ãã©ã€ã³ã䜿çšããã«ãåãã¿ã¹ã¯ã«å¯ŸããŠã€ã¡ãŒãžéã¢ãã«ãå®è¡ããŸãã
このガイドがリリースされた時点では、`image-to-image`パイプラインは超解像度タスクのみをサポートしていることに注意してください。

必要なライブラリをインストールすることから始めましょう。
```bash
pip install transformers
```
[Swin2SR ã¢ãã«](https://huggingface.co/caidas/swin2SR-lightweight-x2-64) ã䜿çšããŠãã€ãã©ã€ã³ãåæåã§ããããã«ãªããŸãããæ¬¡ã«ãã€ã¡ãŒãžã䜿çšããŠãã€ãã©ã€ã³ãåŒã³åºãããšã§ããã€ãã©ã€ã³ãæšè«ã§ããŸããçŸæç¹ã§ã¯ã[Swin2SR ã¢ãã«](https://huggingface.co/models?sort=trending&search=swin2sr) ã®ã¿ããã®ãã€ãã©ã€ã³ã§ãµããŒããããŠããŸãã
```python
import torch
from transformers import pipeline

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
pipe = pipeline(task="image-to-image", model="caidas/swin2SR-lightweight-x2-64", device=device)
```
ã§ã¯ãç»åãèªã¿èŸŒã¿ãŸãããã
```python
from PIL import Image
import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg"
image = Image.open(requests.get(url, stream=True).raw)
print(image.size)
```
```bash
# (532, 432)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg" alt="Photo of a cat"/>
</div>
ããã§ããã€ãã©ã€ã³ã䜿çšããŠæšè«ãå®è¡ã§ããããã«ãªããŸãããç«ã®ç»åã®æ¡å€§ããŒãžã§ã³ãååŸããŸãã
```python
upscaled = pipe(image)
print(upscaled.size)
```
```bash
# (1072, 880)
```
ãã€ãã©ã€ã³ã䜿çšããã«èªåã§æšè«ãå®è¡ãããå Žåã¯ããã©ã³ã¹ãã©ãŒããŒã® `Swin2SRForImageSuperResolution` ã¯ã©ã¹ãš `Swin2SRImageProcessor` ã¯ã©ã¹ã䜿çšã§ããŸããããã«ã¯åãã¢ãã«ã®ãã§ãã¯ãã€ã³ãã䜿çšããŸããã¢ãã«ãšããã»ããµãåæåããŸãããã
```python
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-lightweight-x2-64").to(device)
processor = Swin2SRImageProcessor("caidas/swin2SR-lightweight-x2-64")
```
`pipeline`では、自分で行う必要がある前処理と後処理のステップを抽象化するので、画像を前処理しましょう。画像をプロセッサに渡してから、ピクセル値を GPU に移動します。
```python
pixel_values = processor(image, return_tensors="pt").pixel_values
print(pixel_values.shape)
pixel_values = pixel_values.to(device)
```
ããã§ããã¯ã»ã«å€ãã¢ãã«ã«æž¡ãããšã§ç»åãæšæž¬ã§ããããã«ãªããŸããã
```python
import torch
with torch.no_grad():
outputs = model(pixel_values)
```
出力は、以下のような `ImageSuperResolutionOutput` タイプのオブジェクトです 👇
```
(loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, ..., 0.7463, 0.7446, 0.7453],
[0.8287, 0.8278, 0.8283, ..., 0.7451, 0.7448, 0.7457],
[0.8280, 0.8273, 0.8269, ..., 0.7447, 0.7446, 0.7452],
...,
[0.5923, 0.5933, 0.5924, ..., 0.0697, 0.0695, 0.0706],
[0.5926, 0.5932, 0.5926, ..., 0.0673, 0.0687, 0.0705],
[0.5927, 0.5914, 0.5922, ..., 0.0664, 0.0694, 0.0718]]]],
device='cuda:0'), hidden_states=None, attentions=None)
```
`reconstruction`を取得し、視覚化するために後処理する必要があります。どのように見えるか確認しましょう。
```python
outputs.reconstruction.data.shape
# torch.Size([1, 3, 880, 1072])
```
出力を squeeze して軸 0 を削除し、値をクリップしてから、numpy float に変換する必要があります。次に、[1072, 880] の形状になるように軸を配置し、最後に出力を範囲 [0, 255] に戻します。
```python
import numpy as np
# squeeze, take to CPU and clip the values
output = outputs.reconstruction.data.squeeze().cpu().clamp_(0, 1).numpy()
# rearrange the axes
output = np.moveaxis(output, source=0, destination=-1)
# bring values back to pixel values range
output = (output * 255.0).round().astype(np.uint8)
Image.fromarray(output)
```
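上記の後処理ステップは、1 つの関数にまとめておくと再利用しやすくなります (以下は numpy のみを使い、入力形状 (1, C, H, W) を仮定したスケッチです)。

```python
import numpy as np

def postprocess_reconstruction(reconstruction):
    # (1, C, H, W) の 0〜1 の float 配列を (H, W, C) の uint8 画像に変換する
    output = np.clip(np.squeeze(reconstruction), 0.0, 1.0)
    output = np.moveaxis(output, source=0, destination=-1)
    return (output * 255.0).round().astype(np.uint8)

dummy = np.random.rand(1, 3, 880, 1072).astype(np.float32)  # 仮の再構成テンソル
img = postprocess_reconstruction(dummy)
print(img.shape, img.dtype)  # (880, 1072, 3) uint8
```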
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat_upscaled.png" alt="Upscaled photo of a cat"/>
</div>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Video classification
[[open-in-colab]]
ビデオ分類は、ビデオ全体にラベルまたはクラスを割り当てるタスクです。ビデオには、各ビデオに 1 つのクラスのみが含まれることが期待されます。ビデオ分類モデルはビデオを入力として受け取り、ビデオがどのクラスに属するかについての予測を返します。これらのモデルを使用して、ビデオの内容を分類できます。ビデオ分類の実際のアプリケーションはアクション/アクティビティ認識であり、フィットネス アプリケーションに役立ちます。また、視覚障害のある人にとって、特に通勤時に役立ちます。
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) データセットのサブセットで [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) を微調整します。
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ãã¥ãŒããªã¢ã«ã§èª¬æããã¿ã¹ã¯ã¯ã次ã®ã¢ãã« ã¢ãŒããã¯ãã£ã§ãµããŒããããŠããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae), [ViViT](../model_doc/vivit)
<!--End of the generated tip-->
</Tip>
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install -q pytorchvideo transformers evaluate
```
[PyTorchVideo](https://pytorchvideo.org/) (`pytorchvideo` ãšåŒã°ããŸã) ã䜿çšããŠãããªãåŠçããæºåããŸãã
モデルをアップロードしてコミュニティと共有できるように、Hugging Face アカウントにログインすることをお勧めします。プロンプトが表示されたら、トークンを入力してログインします。
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load UCF101 dataset
まず、[UCF-101 データセット](https://www.crcv.ucf.edu/data/UCF101.php) のサブセットをロードします。これにより、完全なデータセットのトレーニングにさらに時間を費やす前に、実験してすべてが機能することを確認する機会が得られます。
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_dataset_identifier = "sayakpaul/ucf101-subset"
>>> filename = "UCF101_subset.tar.gz"
>>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")
```
サブセットをダウンロードしたら、圧縮アーカイブを抽出する必要があります。
```py
>>> import tarfile
>>> with tarfile.open(file_path) as t:
... t.extractall(".")
```
倧ãŸãã«èšããšãããŒã¿ã»ããã¯æ¬¡ã®ããã«æ§æãããŠããŸãã
```bash
UCF101_subset/
train/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
...
val/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
...
test/
BandMarching/
video_1.mp4
video_2.mp4
...
Archery
video_1.mp4
video_2.mp4
...
...
```
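上のディレクトリ構造から、すべてのビデオ ファイル パスは `pathlib` の `glob` で収集できます (後続のコードが参照する `all_video_file_paths` の定義がこの抜粋では省略されているため、以下は仮のディレクトリを作って収集の動作を確認するスケッチです)。

```python
import tempfile
from pathlib import Path

# 仮のディレクトリ構造を作成して、glob によるパス収集を確認する
dataset_root_path = Path(tempfile.mkdtemp()) / "UCF101_subset"
for split in ("train", "val", "test"):
    class_dir = dataset_root_path / split / "Archery"
    class_dir.mkdir(parents=True)
    (class_dir / "video_1.avi").touch()

all_video_file_paths = sorted(
    list(dataset_root_path.glob("train/*/*.avi"))
    + list(dataset_root_path.glob("val/*/*.avi"))
    + list(dataset_root_path.glob("test/*/*.avi"))
)
print(len(all_video_file_paths))  # 3
```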
(`sorted`)ããã ãã㪠ãã¹ã¯æ¬¡ã®ããã«è¡šç€ºãããŸãã
```bash
...
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'
...
```
åãã°ã«ãŒã/ã·ãŒã³ã«å±ãããã㪠ã¯ãªãããããããã㪠ãã¡ã€ã« ãã¹ã§ã¯ã°ã«ãŒãã`g`ã§ç€ºãããŠããããšãããããŸããããšãã°ã`v_ApplyEyeMakeup_g07_c04.avi`ã`v_ApplyEyeMakeup_g07_c06.avi`ãªã©ã§ãã
検証と評価の分割では、[データ漏洩](https://www.kaggle.com/code/alexisbcook/data-leakage) を防ぐために、同じグループ/シーンからのビデオ クリップを使用しないでください。このチュートリアルで使用しているサブセットでは、この情報が考慮されています。
次に、データセット内に存在するラベルのセットを取得します。また、モデルを初期化するときに役立つ 2 つの辞書を作成します。
* `label2id`: ã¯ã©ã¹åãæŽæ°ã«ãããããŸãã
* `id2label`: æŽæ°ãã¯ã©ã¹åã«ãããã³ã°ããŸãã
```py
>>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths})
>>> label2id = {label: i for i, label in enumerate(class_labels)}
>>> id2label = {i: label for label, i in label2id.items()}
>>> print(f"Unique classes: {list(label2id.keys())}.")
# Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress'].
```
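`label2id` と `id2label` が互いに逆写像になっていることは、小さなクラス集合で確認できます (仮のクラス名を使ったスケッチです)。

```python
# 仮のクラス名で label2id / id2label の相互変換を確認する
class_labels = sorted({"BandMarching", "Archery", "ApplyEyeMakeup"})
label2id = {label: i for i, label in enumerate(class_labels)}
id2label = {i: label for label, i in label2id.items()}
print(label2id)  # {'ApplyEyeMakeup': 0, 'Archery': 1, 'BandMarching': 2}
print(id2label[1])  # Archery
```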
åæ§çãªã¯ã©ã¹ã10çš®é¡ãããŸãããã¬ãŒãã³ã° ã»ããã«ã¯ãã¯ã©ã¹ããšã« 30 åã®ãããªããããŸãã
## Load a model to fine-tune
äºåãã¬ãŒãã³ã°ããããã§ãã¯ãã€ã³ããšããã«é¢é£ããç»åããã»ããµãããããªåé¡ã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããŸããã¢ãã«ã®ãšã³ã³ãŒããŒã«ã¯äºåãã¬ãŒãã³ã°ããããã©ã¡ãŒã¿ãŒãä»å±ããŠãããåé¡ãããã¯ã©ã³ãã ã«åæåãããŸããç»åããã»ããµã¯ãããŒã¿ã»ããã®ååŠçãã€ãã©ã€ã³ãäœæãããšãã«åœ¹ç«ã¡ãŸãã
```py
>>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
>>> model_ckpt = "MCG-NJU/videomae-base"
>>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
>>> model = VideoMAEForVideoClassification.from_pretrained(
... model_ckpt,
... label2id=label2id,
... id2label=id2label,
... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
... )
```
ã¢ãã«ã®ããŒãäžã«ã次ã®èŠåã衚瀺ãããå ŽåããããŸãã
```bash
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
この警告は、一部の重み (たとえば、`classifier`層の重みとバイアス) を破棄し、他のいくつかの重み (新しい`classifier`層の重みとバイアス) をランダムに初期化していることを示しています。この場合、これは予想されることです。事前にトレーニングされた重みを持たない新しい頭部を追加しているため、推論に使用する前にこのモデルを微調整する必要があるとライブラリが警告しています。これはまさに、私たちがこれから行おうとしていることです。
**注意** [このチェックポイント](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) は、ドメインに重複のある同様のダウンストリーム タスクで微調整して取得されたチェックポイントであるため、このタスクのパフォーマンスが向上することに注意してください。`MCG-NJU/videomae-base-finetuned-kinetics` を微調整して取得した [このチェックポイント](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) を確認できます。
## Prepare the datasets for training
ビデオの前処理には、[PyTorchVideo ライブラリ](https://pytorchvideo.org/) を利用します。まず、必要な依存関係をインポートします。
```py
>>> import pytorchvideo.data
>>> from pytorchvideo.transforms import (
... ApplyTransformToKey,
... Normalize,
... RandomShortSideScale,
... RemoveKey,
... ShortSideScale,
... UniformTemporalSubsample,
... )
>>> from torchvision.transforms import (
... Compose,
... Lambda,
... RandomCrop,
... RandomHorizontalFlip,
... Resize,
... )
```
トレーニング データセットの変換には、均一な時間サブサンプリング、ピクセル正規化、ランダム クロッピング、およびランダムな水平反転を組み合わせて使用します。検証および評価のデータセットの変換では、ランダムなトリミングと水平反転を除き、同じ変換チェーンを維持します。これらの変換の詳細については、[PyTorchVideo の公式ドキュメント](https://pytorchvideo.org) を参照してください。
事前トレーニングされたモデルに関連付けられた`image_processor`を使用して、次の情報を取得します。
* ãã㪠ãã¬ãŒã ã®ãã¯ã»ã«ãæ£èŠåãããç»åã®å¹³åå€ãšæšæºåå·®ã
* ãã㪠ãã¬ãŒã ã®ãµã€ãºã倿Žããã空éè§£å床ã
ãŸããããã€ãã®å®æ°ãå®çŸ©ããŸãã
```py
>>> mean = image_processor.image_mean
>>> std = image_processor.image_std
>>> if "shortest_edge" in image_processor.size:
... height = width = image_processor.size["shortest_edge"]
>>> else:
... height = image_processor.size["height"]
... width = image_processor.size["width"]
>>> resize_to = (height, width)
>>> num_frames_to_sample = model.config.num_frames
>>> sample_rate = 4
>>> fps = 30
>>> clip_duration = num_frames_to_sample * sample_rate / fps
```
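`clip_duration` は、サンプリングするフレーム数 × サンプリング レートを FPS で割った秒数、つまり 1 クリップが元のビデオ上でカバーする時間です。VideoMAE の `num_frames` を 16 と仮定すると、次のようになります (スケッチ)。

```python
# 1 クリップがカバーする秒数を計算する (num_frames_to_sample=16 は仮定)
num_frames_to_sample = 16
sample_rate = 4  # 4 フレームごとに 1 フレームをサンプリング
fps = 30
clip_duration = num_frames_to_sample * sample_rate / fps
print(round(clip_duration, 2))  # 2.13
```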
次ã«ãããŒã¿ã»ããåºæã®å€æãšããŒã¿ã»ãããããããå®çŸ©ããŸãããã¬ãŒãã³ã°ã»ããããå§ããŸã:
```py
>>> train_transform = Compose(
... [
... ApplyTransformToKey(
... key="video",
... transform=Compose(
... [
... UniformTemporalSubsample(num_frames_to_sample),
... Lambda(lambda x: x / 255.0),
... Normalize(mean, std),
... RandomShortSideScale(min_size=256, max_size=320),
... RandomCrop(resize_to),
... RandomHorizontalFlip(p=0.5),
... ]
... ),
... ),
... ]
... )
>>> train_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "train"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration),
... decode_audio=False,
... transform=train_transform,
... )
```
åãäžé£ã®ã¯ãŒã¯ãããŒãæ€èšŒã»ãããšè©äŸ¡ã»ããã«é©çšã§ããŸãã
```py
>>> val_transform = Compose(
... [
... ApplyTransformToKey(
... key="video",
... transform=Compose(
... [
... UniformTemporalSubsample(num_frames_to_sample),
... Lambda(lambda x: x / 255.0),
... Normalize(mean, std),
... Resize(resize_to),
... ]
... ),
... ),
... ]
... )
>>> val_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "val"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
... decode_audio=False,
... transform=val_transform,
... )
>>> test_dataset = pytorchvideo.data.Ucf101(
... data_path=os.path.join(dataset_root_path, "test"),
... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
... decode_audio=False,
... transform=val_transform,
... )
```
**注意**: 上記のデータセット パイプラインは、[公式 PyTorchVideo サンプル](https://pytorchvideo.org/docs/tutorial_classification#dataset) から取得したものです。UCF-101 データセットには [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) 関数を使用しています。内部では、[`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) オブジェクトを返します。`LabeledVideoDataset` クラスは、PyTorchVideo データセット内のすべてのビデオの基本クラスです。したがって、PyTorchVideo で既製でサポートされていないカスタム データセットを使用したい場合は、それに応じて `LabeledVideoDataset` クラスを拡張できます。詳細については、`data`API [ドキュメント](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html)を参照してください。また、データセットが同様の構造 (上に示したもの) に従っている場合は、`pytorchvideo.data.Ucf101()` を使用すると問題なく動作するはずです。
`num_videos` 引数にアクセスすると、データセット内のビデオの数を知ることができます。
```py
>>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
# (300, 30, 75)
```
## Visualize the preprocessed video for better debugging
```py
>>> import imageio
>>> import numpy as np
>>> from IPython.display import Image
>>> def unnormalize_img(img):
... """Un-normalizes the image pixels."""
... img = (img * std) + mean
... img = (img * 255).astype("uint8")
... return img.clip(0, 255)
>>> def create_gif(video_tensor, filename="sample.gif"):
... """Prepares a GIF from a video tensor.
...
... The video tensor is expected to have the following shape:
... (num_frames, num_channels, height, width).
... """
... frames = []
... for video_frame in video_tensor:
... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy())
... frames.append(frame_unnormalized)
... kargs = {"duration": 0.25}
... imageio.mimsave(filename, frames, "GIF", **kargs)
... return filename
>>> def display_gif(video_tensor, gif_name="sample.gif"):
... """Prepares and displays a GIF from a video tensor."""
... video_tensor = video_tensor.permute(1, 0, 2, 3)
... gif_filename = create_gif(video_tensor, gif_name)
... return Image(filename=gif_filename)
>>> sample_video = next(iter(train_dataset))
>>> video_tensor = sample_video["video"]
>>> display_gif(video_tensor)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/>
</div>
## Train the model
🤗 Transformers の [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) をモデルのトレーニングに利用します。`Trainer`をインスタンス化するには、トレーニング構成と評価メトリクスを定義する必要があります。最も重要なのは [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments) で、これはトレーニングを構成するためのすべての属性を含むクラスです。モデルのチェックポイントの保存に使用される出力フォルダー名が必要です。また、🤗 Hub 上のモデル リポジトリ内のすべての情報を同期するのにも役立ちます。

トレーニング引数のほとんどは一目瞭然ですが、ここで非常に重要なのは`remove_unused_columns=False`です。これは、モデルの呼び出し関数で使用されない特徴量を削除するかどうかを制御します。デフォルトでは`True`です。これは、通常は未使用の特徴量列を削除して、モデルの呼び出し関数への入力を展開しやすくするのが理想的だからです。ただし、この場合、`pixel_values` (モデルが入力として期待する必須キーです) を作成するには、未使用の特徴量 (特に`video`) が必要です。
```py
>>> from transformers import TrainingArguments, Trainer
>>> model_name = model_ckpt.split("/")[-1]
>>> new_model_name = f"{model_name}-finetuned-ucf101-subset"
>>> num_epochs = 4
>>> batch_size = 8  # 仮のバッチ サイズ (GPU メモリに応じて調整してください)
>>> args = TrainingArguments(
... new_model_name,
... remove_unused_columns=False,
... evaluation_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=batch_size,
... per_device_eval_batch_size=batch_size,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
... )
```
`pytorchvideo.data.Ucf101()` によって返されるデータセットは `__len__` メソッドを実装していません。そのため、`TrainingArguments`をインスタンス化するときに`max_steps`を定義する必要があります。

次に、予測からメトリクスを計算する関数を定義する必要があります。これには、これからロードする`metric`を使用します。必要な前処理は、予測されたロジットの argmax を取得することだけです。
```py
import evaluate
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions = np.argmax(eval_pred.predictions, axis=1)
return metric.compute(predictions=predictions, references=eval_pred.label_ids)
```
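`compute_metrics` の argmax ベースの精度計算は、`evaluate` を使わなくても小さな numpy 配列で再現できます (仮のロジットとラベルを使ったスケッチです)。

```python
import numpy as np

# 仮のロジット (3 サンプル x 2 クラス) と正解ラベル
logits = np.array([[0.1, 2.0], [1.5, 0.2], [0.3, 0.9]])
label_ids = np.array([1, 0, 0])

predictions = np.argmax(logits, axis=1)  # 各行で最大のロジットを持つクラス
accuracy = float((predictions == label_ids).mean())
print(accuracy)  # 3 件中 2 件が正解
```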
**評価に関する注意事項**:

[VideoMAE 論文](https://arxiv.org/abs/2203.12602) では、著者は次の評価戦略を使用しています。テスト ビデオからのいくつかのクリップでモデルを評価し、それらのクリップにさまざまなクロップを適用して、合計スコアを報告します。ただし、単純さと簡潔さを保つために、このチュートリアルではそれを考慮しません。

また、サンプルをまとめてバッチ処理するために使用される `collate_fn` を定義します。各バッチは、`pixel_values` と `labels` という 2 つのキーで構成されます。
```py
>>> def collate_fn(examples):
... # permute to (num_frames, num_channels, height, width)
... pixel_values = torch.stack(
... [example["video"].permute(1, 0, 2, 3) for example in examples]
... )
... labels = torch.tensor([example["label"] for example in examples])
... return {"pixel_values": pixel_values, "labels": labels}
```
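`collate_fn` が行う形状変換 (permute と stack) は、torch の代わりに numpy でも確認できます (小さな仮のテンソル形状を使ったスケッチです)。

```python
import numpy as np

# 各 video は (num_channels, num_frames, height, width) と仮定 (小さな形状で確認)
examples = [{"video": np.zeros((3, 16, 8, 8)), "label": i} for i in range(2)]

# (num_frames, num_channels, height, width) に並べ替えてからバッチ化する
pixel_values = np.stack([ex["video"].transpose(1, 0, 2, 3) for ex in examples])
labels = np.array([ex["label"] for ex in examples])
print(pixel_values.shape)  # (2, 16, 3, 8, 8)
```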
次ã«ãããããã¹ãŠãããŒã¿ã»ãããšãšãã«`Trainer`ã«æž¡ãã ãã§ãã
```py
>>> trainer = Trainer(
... model,
... args,
... train_dataset=train_dataset,
... eval_dataset=val_dataset,
... tokenizer=image_processor,
... compute_metrics=compute_metrics,
... data_collator=collate_fn,
... )
```
ãã§ã«ããŒã¿ãååŠçããŠããã®ã«ããªãããŒã¯ãã€ã¶ãŒãšããŠ`image_processor`ãæž¡ããã®ãäžæè°ã«æããããããŸãããããã¯ãã€ã¡ãŒãž ããã»ããµæ§æãã¡ã€ã« (JSON ãšããŠä¿å) ãããäžã®ãªããžããªã«ã¢ããããŒããããããã«ããããã ãã§ãã
次ã«ã`train` ã¡ãœãããåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> train_results = trainer.train()
```
トレーニングが完了したら、[`~transformers.Trainer.push_to_hub`] メソッドを使用してモデルをハブに共有し、誰もがモデルを使用できるようにします。
```py
>>> trainer.push_to_hub()
```
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ã®ããã«ãããªãããŒãããŸãã
```py
>>> sample_test_video = next(iter(test_dataset))
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/>
</div>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠãããªåé¡çšã®` pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«ãããªãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> video_cls = pipeline(model="my_awesome_video_cls_model")
>>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi")
[{'score': 0.9272987842559814, 'label': 'BasketballDunk'},
{'score': 0.017777055501937866, 'label': 'BabyCrawling'},
{'score': 0.01663011871278286, 'label': 'BalanceBeam'},
{'score': 0.009560945443809032, 'label': 'BandMarching'},
{'score': 0.0068979403004050255, 'label': 'BaseballPitch'}]
```
必要に応じて、`pipeline`の結果を手動で複製することもできます。
```py
>>> def run_inference(model, video):
... # (num_frames, num_channels, height, width)
...     permuted_sample_test_video = video.permute(1, 0, 2, 3)
...     inputs = {
...         "pixel_values": permuted_sample_test_video.unsqueeze(0),
... "labels": torch.tensor(
... [sample_test_video["label"]]
... ), # this can be skipped if you don't have labels available.
... }
... device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
... inputs = {k: v.to(device) for k, v in inputs.items()}
... model = model.to(device)
... # forward pass
... with torch.no_grad():
... outputs = model(**inputs)
... logits = outputs.logits
... return logits
```
次に、入力をモデルに渡し、`logits`を返します。
```py
>>> logits = run_inference(trained_model, sample_test_video["video"])
```
`logits` ããã³ãŒããããšã次ã®ããã«ãªããŸãã
```py
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: BasketballDunk
```
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Text to speech
[[open-in-colab]]
Text-to-speech (TTS) is the task of creating natural-sounding speech from text, where the speech can be generated in multiple languages and for multiple speakers. Several text-to-speech models are currently available in ð€ Transformers, such as [Bark](../model_doc/bark), [MMS](../model_doc/mms), [VITS](../model_doc/vits) and [SpeechT5](../model_doc/speecht5).

You can easily generate audio using the `text-to-audio` pipeline (or its alias - `text-to-speech`). Some models, like Bark, can also be conditioned to generate non-verbal communications such as laughing, sighing and crying, or even add music.
Here's an example of how you would use the `text-to-speech` pipeline with Bark:
```py
>>> from transformers import pipeline
>>> pipe = pipeline("text-to-speech", model="suno/bark-small")
>>> text = "[clears throat] This is a test ... and I just took a long pause."
>>> output = pipe(text)
```
Here's a code snippet you can use to listen to the resulting audio in a notebook:
```python
>>> from IPython.display import Audio
>>> Audio(output["audio"], rate=output["sampling_rate"])
```
For more examples on what Bark and other pretrained TTS models can do, refer to our [Audio course](https://huggingface.co/learn/audio-course/chapter6/pre-trained_models).
If you are looking to fine-tune a TTS model, the only one currently available for fine-tuning is SpeechT5. SpeechT5 is pre-trained on a combination of speech-to-text and text-to-speech data, allowing it to learn a unified space of hidden representations shared by both text and speech. This means the same pre-trained model can be fine-tuned for different tasks. Furthermore, SpeechT5 supports multiple speakers through x-vector speaker embeddings.
The remainder of this guide illustrates how to:

1. Fine-tune [SpeechT5](../model_doc/speecht5) that was originally trained on English speech on the Dutch (`nl`) language subset of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset.
2. Use your refined model for inference in one of two ways: using a pipeline or directly.
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install datasets soundfile speechbrain accelerate
```
As not all the SpeechT5 features have been merged into an official release yet, install ð€ Transformers from source:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
<Tip>
To follow this guide you will need a GPU. If you're working in a notebook, run the following line to check if a GPU is available:
```bash
!nvidia-smi
```
</Tip>
We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load the dataset
[VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. It contains labelled audio-transcription data for 15 European languages. In this guide, we are using the Dutch language subset, but feel free to pick another subset.

Note that VoxPopuli, or any other automated speech recognition (ASR) dataset, may not be the most suitable option for training TTS models. The features that make it beneficial for ASR, such as excessive background noise, are typically undesirable in TTS. However, finding top-quality, multilingual, and multi-speaker TTS datasets can be quite challenging.
Let's load the data:
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("facebook/voxpopuli", "nl", split="train")
>>> len(dataset)
20968
```
20968 examples should be sufficient for fine-tuning. SpeechT5 expects audio data to have a sampling rate of 16 kHz, so make sure the examples in the dataset meet this requirement:
```py
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```
## Preprocess the data
Let's begin by defining the model checkpoint to use and loading the appropriate processor:
```py
>>> from transformers import SpeechT5Processor
>>> checkpoint = "microsoft/speecht5_tts"
>>> processor = SpeechT5Processor.from_pretrained(checkpoint)
```
### Text cleanup for SpeechT5 tokenization
Start by cleaning up the text data. You'll need the tokenizer part of the processor to process the text:
```py
>>> tokenizer = processor.tokenizer
```
The dataset examples contain `raw_text` and `normalized_text` features. When deciding which feature to use as the text input, consider that the SpeechT5 tokenizer doesn't have any tokens for numbers. In `normalized_text` the numbers are written out as text. Thus, it is a better fit, and we recommend using `normalized_text` as input text.
Because SpeechT5 was trained on the English language, it may not recognize certain characters in the Dutch dataset. If left as is, these characters will be converted to `<unk>` tokens. However, in Dutch, certain characters like `à` are used to stress syllables. In order to preserve the meaning of the text, we can replace this character with a regular `a`.
To identify unsupported tokens, extract all unique characters in the dataset using the `SpeechT5Tokenizer`, which works with characters as tokens. To do this, write the `extract_all_chars` mapping function that concatenates the transcriptions from all examples into one string and converts it to a set of characters. Make sure to set `batched=True` and `batch_size=-1` in `dataset.map()` so that all transcriptions are available at once for the mapping function.
```py
>>> def extract_all_chars(batch):
... all_text = " ".join(batch["normalized_text"])
... vocab = list(set(all_text))
... return {"vocab": [vocab], "all_text": [all_text]}
>>> vocabs = dataset.map(
... extract_all_chars,
... batched=True,
... batch_size=-1,
... keep_in_memory=True,
... remove_columns=dataset.column_names,
... )
>>> dataset_vocab = set(vocabs["vocab"][0])
>>> tokenizer_vocab = {k for k, _ in tokenizer.get_vocab().items()}
```
Now you have two sets of characters: one with the vocabulary from the dataset and one with the vocabulary from the tokenizer. To identify any unsupported characters in the dataset, you can take the difference between these two sets. The resulting set will contain the characters that are in the dataset but not in the tokenizer.
```py
>>> dataset_vocab - tokenizer_vocab
{' ', 'à', 'ç', 'è', 'ë', 'í', 'ï', 'ö', 'ü'}
```
To handle the unsupported characters identified in the previous step, define a function that maps these characters into valid tokens. Note that spaces are already replaced by `▁` in the tokenizer and don't need to be handled separately.
```py
>>> replacements = [
...     ("à", "a"),
...     ("ç", "c"),
...     ("è", "e"),
...     ("ë", "e"),
...     ("í", "i"),
...     ("ï", "i"),
...     ("ö", "o"),
...     ("ü", "u"),
... ]
>>> def cleanup_text(inputs):
... for src, dst in replacements:
... inputs["normalized_text"] = inputs["normalized_text"].replace(src, dst)
... return inputs
>>> dataset = dataset.map(cleanup_text)
```
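One way to verify that the cleanup worked is to recompute the vocabulary difference, which should now be empty. The sketch below illustrates the same check in plain Python with a made-up tokenizer vocabulary and sample string:

```python
# A toy tokenizer vocabulary and sample text (made-up for illustration)
tokenizer_vocab = set("abcdefghijklmnopqrstuvwxyz ")
replacements = [("à", "a"), ("ç", "c"), ("ë", "e"), ("ü", "u")]

text = "àppel ën çitroen menü"
for src, dst in replacements:
    text = text.replace(src, dst)

# After the replacements, no characters outside the tokenizer vocabulary remain
unsupported = set(text) - tokenizer_vocab
print(unsupported)  # set()
```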
Now that you have dealt with special characters in the text, it's time to shift focus to the audio data.
### Speakers
The VoxPopuli dataset includes speech from multiple speakers, but how many speakers are represented in the dataset? To determine this, we can count the number of unique speakers and the number of examples each speaker contributes to the dataset. With a total of 20,968 examples in the dataset, this information will give us a better understanding of the distribution of speakers and examples in the data.
```py
>>> from collections import defaultdict
>>> speaker_counts = defaultdict(int)
>>> for speaker_id in dataset["speaker_id"]:
... speaker_counts[speaker_id] += 1
```
By plotting a histogram you can get a sense of how much data there is for each speaker.
```py
>>> import matplotlib.pyplot as plt
>>> plt.figure()
>>> plt.hist(speaker_counts.values(), bins=20)
>>> plt.ylabel("Speakers")
>>> plt.xlabel("Examples")
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_speakers_histogram.png" alt="Speakers histogram"/>
</div>
The histogram reveals that approximately one-third of speakers in the dataset have fewer than 100 examples, while around ten speakers have more than 500 examples. To improve training efficiency and balance the dataset, we can limit the data to speakers with between 100 and 400 examples.
```py
>>> def select_speaker(speaker_id):
... return 100 <= speaker_counts[speaker_id] <= 400
>>> dataset = dataset.filter(select_speaker, input_columns=["speaker_id"])
```
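The filter above only checks whether a speaker's example count falls in the 100-400 range. With a toy dictionary of counts (made-up values), the logic looks like this:

```python
# Hypothetical speaker_id -> example count mapping (made-up values)
speaker_counts = {"spk_a": 50, "spk_b": 120, "spk_c": 400, "spk_d": 900}

def select_speaker(speaker_id):
    return 100 <= speaker_counts[speaker_id] <= 400

kept = [s for s in speaker_counts if select_speaker(s)]
print(kept)  # ['spk_b', 'spk_c']
```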
Let's check how many speakers remain:
```py
>>> len(set(dataset["speaker_id"]))
42
```
Let's see how many examples are left:
```py
>>> len(dataset)
9973
```
You're left with just under 10,000 examples from approximately 40 unique speakers, which should be sufficient.

Note that some speakers with few examples may actually have more audio available if the examples are long. However, determining the total amount of audio for each speaker requires scanning through the entire dataset, which is a time-consuming process that involves loading and decoding each audio file. As such, we've chosen to skip this step here.
### Speaker embeddings
To enable the TTS model to differentiate between multiple speakers, you'll need to create a speaker embedding for each example. The speaker embedding is an additional input into the model that captures a particular speaker's voice characteristics. To generate these speaker embeddings, use the pre-trained [spkrec-xvect-voxceleb](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb) model from SpeechBrain.

Create a function `create_speaker_embedding()` that takes an input audio waveform and outputs a 512-element vector containing the corresponding speaker embedding.
```py
>>> import os
>>> import torch
>>> from speechbrain.pretrained import EncoderClassifier
>>> spk_model_name = "speechbrain/spkrec-xvect-voxceleb"
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> speaker_model = EncoderClassifier.from_hparams(
... source=spk_model_name,
... run_opts={"device": device},
... savedir=os.path.join("/tmp", spk_model_name),
... )
>>> def create_speaker_embedding(waveform):
... with torch.no_grad():
... speaker_embeddings = speaker_model.encode_batch(torch.tensor(waveform))
... speaker_embeddings = torch.nn.functional.normalize(speaker_embeddings, dim=2)
... speaker_embeddings = speaker_embeddings.squeeze().cpu().numpy()
... return speaker_embeddings
```
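The `torch.nn.functional.normalize` call in `create_speaker_embedding()` scales the embedding to unit L2 norm. That step alone can be sketched in plain Python (the vector values are made up):

```python
import math

def l2_normalize(vec):
    """Scale a vector so its L2 norm equals 1."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

embedding = [3.0, 4.0]  # a made-up 2-element "embedding"
normalized = l2_normalize(embedding)
print(normalized)  # [0.6, 0.8]
```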
It's important to note that the `speechbrain/spkrec-xvect-voxceleb` model was trained on English speech from the VoxCeleb dataset, whereas the training examples in this guide are in Dutch. While we believe that this model will still generate reasonable speaker embeddings for our Dutch dataset, this assumption may not hold true in all cases.

For optimal results, we recommend training an x-vector model on the target speech first. This will ensure that the model is better able to capture the unique voice characteristics present in the Dutch language.
### Processing the dataset
Finally, let's process the data into the format the model expects. Create a `prepare_dataset` function that takes in a single example and uses the `SpeechT5Processor` object to tokenize the input text and load the target audio into a log-mel spectrogram. It should also add the speaker embeddings as an additional input.
```py
>>> def prepare_dataset(example):
... audio = example["audio"]
... example = processor(
... text=example["normalized_text"],
... audio_target=audio["array"],
... sampling_rate=audio["sampling_rate"],
... return_attention_mask=False,
... )
... # strip off the batch dimension
... example["labels"] = example["labels"][0]
... # use SpeechBrain to obtain x-vector
... example["speaker_embeddings"] = create_speaker_embedding(audio["array"])
... return example
```
Verify the processing is correct by looking at a single example:
```py
>>> processed_example = prepare_dataset(dataset[0])
>>> list(processed_example.keys())
['input_ids', 'labels', 'stop_labels', 'speaker_embeddings']
```
The speaker embeddings should be a 512-element vector:
```py
>>> processed_example["speaker_embeddings"].shape
(512,)
```
The labels should be a log-mel spectrogram with 80 mel bins:
```py
>>> import matplotlib.pyplot as plt
>>> plt.figure()
>>> plt.imshow(processed_example["labels"].T)
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_1.png" alt="Log-mel spectrogram with 80 mel bins"/>
</div>
Side note: If you find this spectrogram confusing, it may be due to your familiarity with the convention of placing low frequencies at the bottom and high frequencies at the top of a plot. However, when plotting spectrograms as an image using the matplotlib library, the y-axis is flipped and the spectrograms appear upside down.
Now apply the processing function to the entire dataset. This will take between 5 and 10 minutes.
```py
>>> dataset = dataset.map(prepare_dataset, remove_columns=dataset.column_names)
```
You'll see a warning saying that some examples in the dataset are longer than the maximum input length the model can handle (600 tokens). Remove those examples from the dataset. Here we go even further and, to allow for larger batch sizes, remove anything over 200 tokens.
```py
>>> def is_not_too_long(input_ids):
... input_length = len(input_ids)
... return input_length < 200
>>> dataset = dataset.filter(is_not_too_long, input_columns=["input_ids"])
>>> len(dataset)
8259
```
Next, create a basic train/test split:
```py
>>> dataset = dataset.train_test_split(test_size=0.1)
```
### Data collator
è€æ°ã®äŸã 1 ã€ã®ãããã«çµåããã«ã¯ãã«ã¹ã¿ã ããŒã¿ç
§ååšãå®çŸ©ããå¿
èŠããããŸãããã®ã³ã¬ãŒã¿ãŒã¯ãçãã·ãŒã±ã³ã¹ãããã£ã³ã°ã§åã蟌ã¿ãŸãã
ããŒã¯ã³ã䜿çšããŠããã¹ãŠã®äŸãåãé·ãã«ãªãããã«ããŸããã¹ãã¯ããã°ã©ã ã©ãã«ã®å Žåãåã蟌ãŸããéšåã¯ç¹å¥ãªå€ `-100` ã«çœ®ãæããããŸãããã®ç¹å¥ãªäŸ¡å€ã¯
ã¹ãã¯ããã°ã©ã æå€±ãèšç®ãããšãã«ãã¹ãã¯ããã°ã©ã ã®ãã®éšåãç¡èŠããããã«ã¢ãã«ã«æç€ºããŸãã
```py
>>> from dataclasses import dataclass
>>> from typing import Any, Dict, List, Union
>>> @dataclass
... class TTSDataCollatorWithPadding:
... processor: Any
... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
... input_ids = [{"input_ids": feature["input_ids"]} for feature in features]
... label_features = [{"input_values": feature["labels"]} for feature in features]
... speaker_features = [feature["speaker_embeddings"] for feature in features]
... # collate the inputs and targets into a batch
... batch = processor.pad(input_ids=input_ids, labels=label_features, return_tensors="pt")
... # replace padding with -100 to ignore loss correctly
... batch["labels"] = batch["labels"].masked_fill(batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100)
... # not used during fine-tuning
... del batch["decoder_attention_mask"]
... # round down target lengths to multiple of reduction factor
... if model.config.reduction_factor > 1:
... target_lengths = torch.tensor([len(feature["input_values"]) for feature in label_features])
... target_lengths = target_lengths.new(
... [length - length % model.config.reduction_factor for length in target_lengths]
... )
... max_length = max(target_lengths)
... batch["labels"] = batch["labels"][:, :max_length]
... # also add in the speaker embeddings
... batch["speaker_embeddings"] = torch.tensor(speaker_features)
... return batch
```
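The `masked_fill` call in the collator replaces label values at padded positions with `-100`, based on the decoder attention mask. Pulled out for a single sequence (values made up), the operation amounts to:

```python
# Hypothetical decoder attention mask (1 = real frame, 0 = padding) and label values
attention_mask = [1, 1, 1, 0, 0]
labels = [0.2, 0.5, 0.1, 0.0, 0.0]

# Padded positions are replaced with -100 so the loss ignores them
masked = [v if m == 1 else -100 for v, m in zip(labels, attention_mask)]
print(masked)  # [0.2, 0.5, 0.1, -100, -100]
```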
In SpeechT5, the input to the decoder part of the model is reduced by a factor of 2. In other words, it throws away every other timestep from the target sequence. The decoder then predicts a sequence that is twice as long. Since the original target sequence length may be odd, the data collator makes sure to round the maximum length of the batch down to be a multiple of 2.
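The rounding is just `length - length % reduction_factor`. Isolated with made-up lengths, it behaves as follows:

```python
reduction_factor = 2  # SpeechT5's default reduction factor

# Hypothetical spectrogram lengths (number of timesteps) in a batch
target_lengths = [101, 100, 57]
rounded = [length - length % reduction_factor for length in target_lengths]
print(rounded)  # [100, 100, 56]
```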
```py
>>> data_collator = TTSDataCollatorWithPadding(processor=processor)
```
## Train the model
Load the pre-trained model from the same checkpoint as you used for loading the processor:
```py
>>> from transformers import SpeechT5ForTextToSpeech
>>> model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
```
The `use_cache=True` option is incompatible with gradient checkpointing. Disable it for training.
```py
>>> model.config.use_cache = False
```
Define the training arguments. Here we are not computing any evaluation metrics during the training process; instead, we'll only look at the loss:
```python
>>> from transformers import Seq2SeqTrainingArguments
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="speecht5_finetuned_voxpopuli_nl", # change to a repo name of your choice
... per_device_train_batch_size=4,
... gradient_accumulation_steps=8,
... learning_rate=1e-5,
... warmup_steps=500,
... max_steps=4000,
... gradient_checkpointing=True,
... fp16=True,
... evaluation_strategy="steps",
... per_device_eval_batch_size=2,
... save_steps=1000,
... eval_steps=1000,
... logging_steps=25,
... report_to=["tensorboard"],
... load_best_model_at_end=True,
... greater_is_better=False,
... label_names=["labels"],
... push_to_hub=True,
... )
```
Instantiate the `Trainer` object and pass the model, dataset, and data collator to it.
```py
>>> from transformers import Seq2SeqTrainer
>>> trainer = Seq2SeqTrainer(
... args=training_args,
... model=model,
... train_dataset=dataset["train"],
... eval_dataset=dataset["test"],
... data_collator=data_collator,
... tokenizer=processor,
... )
```
And with that, you're ready to start training! Training will take several hours. Depending on your GPU, it is possible that you will encounter a CUDA "out-of-memory" error when you start training. In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and increase `gradient_accumulation_steps` by 2x to compensate.
```py
>>> trainer.train()
```
To be able to use your checkpoint with a pipeline, make sure to save the processor with the checkpoint:
```py
>>> processor.save_pretrained("YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```
Push the final model to the ð€ Hub:
```py
>>> trainer.push_to_hub()
```
## Inference
### Inference with a pipeline
Now that you have fine-tuned a model, you can use it for inference! First, let's see how you can use it with a corresponding pipeline. Create a `"text-to-speech"` pipeline with your checkpoint:
```py
>>> from transformers import pipeline
>>> pipe = pipeline("text-to-speech", model="YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```
Pick a piece of text in Dutch you'd like narrated, e.g.:
```py
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
```
To use SpeechT5 with the pipeline, you'll need a speaker embedding. Let's get it from an example in the test dataset:
```py
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```
Now you can pass the text and speaker embeddings to the pipeline, and it will take care of the rest:
```py
>>> forward_params = {"speaker_embeddings": speaker_embeddings}
>>> output = pipe(text, forward_params=forward_params)
>>> output
{'audio': array([-6.82714235e-05, -4.26525949e-04, 1.06134125e-04, ...,
-1.22392643e-03, -7.76011671e-04, 3.29112721e-04], dtype=float32),
'sampling_rate': 16000}
```
You can then listen to the result:
```py
>>> from IPython.display import Audio
>>> Audio(output['audio'], rate=output['sampling_rate'])
```
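If you'd rather save the result to disk than play it in a notebook, libraries such as `soundfile` are convenient, but the standard-library `wave` module is enough. Below is a minimal sketch that converts a float waveform to 16-bit PCM and writes it out (the file name is arbitrary):

```python
import struct
import wave

def save_wav(path, samples, sampling_rate):
    """Write a float waveform (values in -1.0..1.0) as a 16-bit PCM WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)      # mono
        f.setsampwidth(2)      # 16 bits = 2 bytes per sample
        f.setframerate(sampling_rate)
        pcm = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
        )
        f.writeframes(pcm)

# For example, with the pipeline output from above:
# save_wav("speech.wav", output["audio"], output["sampling_rate"])
```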
### Run inference manually
You can achieve the same inference results without using the pipeline; however, more steps will be required.

Load the model from the ð€ Hub:
```py
>>> model = SpeechT5ForTextToSpeech.from_pretrained("YOUR_ACCOUNT/speecht5_finetuned_voxpopuli_nl")
```
Pick an example from the test dataset to obtain a speaker embedding:
```py
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```
Define the input text and tokenize it:
```py
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
>>> inputs = processor(text=text, return_tensors="pt")
```
Create a spectrogram with your model:
```py
>>> spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
```
Visualize the spectrogram, if you'd like to:
```py
>>> plt.figure()
>>> plt.imshow(spectrogram.T)
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_2.png" alt="Generated log-mel spectrogram"/>
</div>
Finally, use the vocoder to turn the spectrogram into sound. Load the pretrained HiFi-GAN vocoder ([`SpeechT5HifiGan`]) and pass it the spectrogram:
```py
>>> from transformers import SpeechT5HifiGan

>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
>>> with torch.no_grad():
...     speech = vocoder(spectrogram)
>>> from IPython.display import Audio
>>> Audio(speech.numpy(), rate=16000)
```
In our experience, obtaining satisfactory results from this model can be challenging. The quality of the speaker embeddings appears to be a significant factor. Since SpeechT5 was pre-trained with English x-vectors, it performs best when using English speaker embeddings. If the synthesized speech sounds poor, try using a different speaker embedding.

Increasing the training duration is also likely to enhance the quality of the results. Even so, the speech clearly is Dutch instead of English, and it does capture the voice characteristics of the speaker (compare to the original audio in the example).
Another thing to experiment with is the model's configuration. For example, try using `config.reduction_factor = 1` to see if this improves the results.
Finally, it is essential to consider ethical considerations. Although TTS technology has numerous useful applications, it may also be used for malicious purposes, such as impersonating someone's voice without their knowledge or consent. Please use TTS judiciously and responsibly.
<!-- docs/source/ja/tasks/audio_classification.md -->

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Audio classification
[[open-in-colab]]
<Youtube id="KWwzcmG98Ds"/>
Audio classification - just like with text - assigns a class label output from the input data. The only difference is instead of text inputs, you have raw audio waveforms. Some practical applications of audio classification include identifying speaker intent, language classification, and even animal species by their sounds.
This guide will show you how to:
1. Fine-tune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to classify speaker intent.
2. Use your fine-tuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset
Start by loading the MInDS-14 dataset from the ð€ Datasets library:
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Split the dataset's `train` split into a smaller train and test set with the [`~datasets.Dataset.train_test_split`] method. This will give you a chance to experiment and make sure everything works before spending more time on the full dataset.
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
Then take a look at the dataset:
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 450
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 113
})
})
```
While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you will focus on the `audio` and `intent_class` in this guide. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:
```py
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```
Take a look at an example now:
```py
>>> minds["train"][0]
{'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828,
-0.00024414, -0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 8000},
'intent_class': 2}
```
There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `intent_class`: represents the class id of the speaker's intent.
To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:
```py
>>> labels = minds["train"].features["intent_class"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
... label2id[label] = str(i)
... id2label[str(i)] = label
```
Now you can convert the label id to a label name:
```py
>>> id2label[str(2)]
'app_error'
```
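Since the two dictionaries are inverses of each other, a round trip through them always returns the original label name. A small sketch with made-up label names:

```python
labels = ["abroad", "app_error", "atm_support"]  # made-up label names

label2id = {label: str(i) for i, label in enumerate(labels)}
id2label = {str(i): label for i, label in enumerate(labels)}

# Converting a name to an id and back returns the original name
print(id2label[label2id["app_error"]])  # app_error
print(all(id2label[label2id[l]] == l for l in labels))  # True
```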
## Preprocess
The next step is to load a Wav2Vec2 feature extractor to process the audio signal:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
The MInDS-14 dataset has a sampling rate of 8 kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16 kHz to use the pretrained Wav2Vec2 model:
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ...,
-2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 16000},
'intent_class': 2}
```
Now create a preprocessing function that:

1. Calls the `audio` column to load, and if necessary, resample the audio file.
2. Checks if the sampling rate of the audio file matches the sampling rate of the audio data the model was pretrained with. You can find this information in the Wav2Vec2 [model card](https://huggingface.co/facebook/wav2vec2-base).
3. Sets a maximum input length to batch longer inputs without truncating them.
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
... )
... return inputs
```
To apply the preprocessing function over the entire dataset, use the ð€ Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need, and rename `intent_class` to `label` because that's the name the model expects:
```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the ð€ [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the ð€ Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions = np.argmax(eval_pred.predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```
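`compute_metrics` takes the argmax of each row of logits as the predicted class and reports the fraction that matches the labels. The same computation in plain Python (logit values made up):

```python
# Hypothetical logits (3 examples x 2 classes) and reference labels
logits = [[2.0, 0.1], [0.3, 1.5], [0.9, 0.8]]
labels = [0, 1, 1]

predictions = [max(range(len(row)), key=row.__getitem__) for row in logits]
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(predictions)  # [0, 1, 0]
print(round(accuracy, 2))  # 0.67
```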
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels and the label mappings:
```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_mind_model",
... evaluation_strategy="epoch",
... save_strategy="epoch",
... learning_rate=3e-5,
... per_device_train_batch_size=32,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=32,
... num_train_epochs=10,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=feature_extractor,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
For a more in-depth example of how to fine-tune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
</Tip>
## Inference
Great, now that you've fine-tuned a model, you can use it for inference!
Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model, if you need to!
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
{'score': 0.09766869246959686, 'label': 'cash_deposit'},
{'score': 0.07998877018690109, 'label': 'app_error'},
{'score': 0.0781070664525032, 'label': 'joint_account'},
{'score': 0.07667109370231628, 'label': 'pay_bill'},
{'score': 0.0755252093076706, 'label': 'balance'}
]
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Load a feature extractor to preprocess the audio file and return the `input` as PyTorch tensors:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
Pass your inputs to the model and return the logits:
```py
>>> import torch
>>> from transformers import AutoModelForAudioClassification

>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:
```py
>>> import torch
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
</pt>
</frameworkcontent>

<!-- docs/source/ja/tasks/asr.md -->

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Automatic speech recognition
[[open-in-colab]]
<Youtube id="TksaY_FDgnk"/>
自動音声認識 (ASR) は音声信号をテキストに変換し、一連の音声入力をテキスト出力にマッピングします。Siri や Alexa などの仮想アシスタントは ASR モデルを使用してユーザーを日常的に支援しており、ライブキャプションや会議中のメモ取りなど、他にも便利なユーザー向けアプリケーションが数多くあります。
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ããŒã¿ã»ããã® [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) ã埮調æŽããŠãé³å£°ãããã¹ãã«æžãèµ·ãããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ãã¥ãŒããªã¢ã«ã§èª¬æããã¿ã¹ã¯ã¯ã次ã®ã¢ãã« ã¢ãŒããã¯ãã£ã§ãµããŒããããŠããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm)
<!--End of the generated tip-->
</Tip>
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install transformers datasets evaluate jiwer
```
モデルをアップロードしてコミュニティと共有できるように、Hugging Face アカウントにログインすることをお勧めします。プロンプトが表示されたら、トークンを入力してログインします。
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset
まず、🤗 Datasets ライブラリから [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) データセットの小さいサブセットをロードします。これにより、完全なデータセットのトレーニングにさらに時間を費やす前に、実験してすべてが機能することを確認する機会が得られます。
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```
[`~Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
次ã«ãããŒã¿ã»ãããèŠãŠã¿ãŸãããã
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 16
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 4
})
})
```
データセットには `lang_id` や `english_transcription` などの多くの有用な情報が含まれていますが、このガイドでは `audio` と `transcription` に焦点を当てます。[`~datasets.Dataset.remove_columns`] メソッドを使用して他の列を削除します。
```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```
ããäžåºŠäŸãèŠãŠã¿ãŸãããã
```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414,
0.00024414, 0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 8000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
次の 2 つのフィールドがあります。
- `audio`: 音声ファイルをロードしてリサンプリングするために呼び出す必要がある、音声信号の 1 次元の `array`。
- `transcription`: ターゲットテキスト。
## Preprocess
次ã®ã¹ãããã§ã¯ãWav2Vec2 ããã»ããµãããŒãããŠãªãŒãã£ãªä¿¡å·ãåŠçããŸãã
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```
MInDS-14 データセットのサンプリング レートは 8000Hz です (この情報は[データセット カード](https://huggingface.co/datasets/PolyAI/minds14)で確認できます)。つまり、事前トレーニングされた Wav2Vec2 モデルを使用するには、データセットを 16000Hz にリサンプリングする必要があります。
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 16000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
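リサンプリングの考え方を示す、線形補間による最小限のスケッチです (🤗 Datasets の `Audio(sampling_rate=16_000)` は、内部でより高品質なリサンプリング アルゴリズムを使用します)。

```python
def resample_linear(samples, orig_sr, target_sr):
    """線形補間によるリサンプリング (説明用の簡略版)。"""
    if orig_sr == target_sr:
        return list(samples)
    ratio = orig_sr / target_sr
    n_out = int(len(samples) * target_sr / orig_sr)
    out = []
    for i in range(n_out):
        pos = i * ratio                      # 元の信号上の位置
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# 8kHz -> 16kHz ではサンプル数が 2 倍になる
upsampled = resample_linear([0.0, 1.0, 0.0, -1.0], orig_sr=8_000, target_sr=16_000)
# len(upsampled) == 8
```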
上の `transcription` でわかるように、テキストには大文字と小文字が混在しています。Wav2Vec2 トークナイザーは大文字のみでトレーニングされているため、テキストがトークナイザーの語彙と一致していることを確認する必要があります。
```py
>>> def uppercase(example):
... return {"transcription": example["transcription"].upper()}
>>> minds = minds.map(uppercase)
```
次ã«ã次ã®ååŠç颿°ãäœæããŸãã
1. `audio`åãåŒã³åºããŠããªãŒãã£ãª ãã¡ã€ã«ãããŒãããŠãªãµã³ããªã³ã°ããŸãã
2. ãªãŒãã£ãª ãã¡ã€ã«ãã `input_values` ãæœåºããããã»ããµã䜿çšã㊠`transcription` åãããŒã¯ã³åããŸãã
```py
>>> def prepare_dataset(batch):
... audio = batch["audio"]
... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
... batch["input_length"] = len(batch["input_values"][0])
... return batch
```
データセット全体に前処理関数を適用するには、🤗 Datasets の [`~datasets.Dataset.map`] 関数を使用します。`num_proc` パラメータでプロセスの数を増やすことで、`map` を高速化できます。[`~datasets.Dataset.remove_columns`] メソッドを使用して、不要な列を削除します。
```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```
🤗 Transformers には ASR 用のデータ照合器がないため、[`DataCollatorWithPadding`] を調整してサンプルのバッチを作成する必要があります。また、テキストとラベルは (データセット全体ではなく) バッチ内の最も長い要素の長さに合わせて動的にパディングされ、均一な長さになります。`padding=True` を設定すると `tokenizer` 関数でテキストをパディングすることもできますが、動的なパディングの方が効率的です。
他のデータ照合器とは異なり、この特定のデータ照合器は、`input_values` と `labels` に異なるパディング方法を適用する必要があります。
```py
>>> import torch
>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union
>>> @dataclass
... class DataCollatorCTCWithPadding:
... processor: AutoProcessor
... padding: Union[bool, str] = "longest"
... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
... # split inputs and labels since they have to be of different lengths and need
... # different padding methods
... input_features = [{"input_values": feature["input_values"][0]} for feature in features]
... label_features = [{"input_ids": feature["labels"]} for feature in features]
... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
... # replace padding with -100 to ignore loss correctly
... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
... batch["labels"] = labels
... return batch
```
次に、`DataCollatorCTCWithPadding` をインスタンス化します。
```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
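データ照合器が行う動的パディングと `-100` によるマスクの考え方は、純粋な Python で次のようにスケッチできます (説明用の簡略版で、実際の処理は `DataCollatorCTCWithPadding` が行います)。

```python
def pad_labels(label_batch):
    """バッチ内の最長系列に合わせてラベルを動的にパディングし、
    パディング位置を -100 にして損失計算から除外する最小スケッチ。"""
    max_len = max(len(labels) for labels in label_batch)
    return [labels + [-100] * (max_len - len(labels)) for labels in label_batch]

batch_labels = pad_labels([[5, 6, 7], [8]])
# batch_labels == [[5, 6, 7], [8, -100, -100]]
```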
## Evaluate
トレーニング中にメトリクスを含めると、多くの場合、モデルのパフォーマンスを評価するのに役立ちます。🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) ライブラリを使用すると、評価メソッドをすばやくロードできます。このタスクでは、[単語エラー率](https://huggingface.co/spaces/evaluate-metric/wer) (WER) メトリクスをロードします (🤗 Evaluate の[クイック ツアー](https://huggingface.co/docs/evaluate/a_quick_tour)を参照して、メトリクスをロードして計算する方法の詳細を確認してください)。
```py
>>> import evaluate
>>> wer = evaluate.load("wer")
```
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ã㊠WER ãèšç®ãã颿°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(pred):
... pred_logits = pred.predictions
... pred_ids = np.argmax(pred_logits, axis=-1)
... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
... pred_str = processor.batch_decode(pred_ids)
... label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
...     wer_score = wer.compute(predictions=pred_str, references=label_str)
...     return {"wer": wer_score}
```
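単語エラー率は、単語単位の編集距離 (挿入・削除・置換) を参照文の単語数で割ったものです。`evaluate.load("wer")` が計算する内容の考え方を、標準ライブラリだけで示す最小スケッチです。

```python
def word_error_rate(reference, hypothesis):
    """単語レベルの編集距離を動的計画法で計算し、参照単語数で割る。"""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # 削除
                          d[i][j - 1] + 1,         # 挿入
                          d[i - 1][j - 1] + cost)  # 置換
    return d[len(ref)][len(hyp)] / len(ref)

word_error_rate("set up a joint account", "set up joint acount")
# 2 / 5 = 0.4 (削除 1 回 + 置換 1 回)
```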
ããã§`compute_metrics`颿°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] を使用したモデルの微調整に慣れていない場合は、[こちら](../training#train-with-pytorch-trainer)の基本的なチュートリアルをご覧ください。
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForCTC`] ã§ Wav2Vec2 ãããŒãããŸãã `ctc_loss_reduction` ãã©ã¡ãŒã¿ã§é©çšããåæžãæå®ããŸããå€ãã®å Žåãããã©ã«ãã®åèšã§ã¯ãªãå¹³åã䜿çšããæ¹ãé©åã§ãã
```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer
>>> model = AutoModelForCTC.from_pretrained(
... "facebook/wav2vec2-base",
... ctc_loss_reduction="mean",
... pad_token_id=processor.tokenizer.pad_token_id,
... )
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] でトレーニング ハイパーパラメータを定義します。唯一の必須パラメータは、モデルの保存場所を指定する `output_dir` です。`push_to_hub=True` を設定して、このモデルをハブにプッシュします (モデルをアップロードするには、Hugging Face にサインインする必要があります)。各エポックの終了時に、[`Trainer`] は WER を評価し、トレーニング チェックポイントを保存します。
2. トレーニング引数を、モデル、データセット、トークナイザー、データ照合器、および `compute_metrics` 関数とともに [`Trainer`] に渡します。
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_asr_mind_model",
... per_device_train_batch_size=8,
... gradient_accumulation_steps=2,
... learning_rate=1e-5,
... warmup_steps=500,
... max_steps=2000,
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
... evaluation_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
... logging_steps=25,
... load_best_model_at_end=True,
... metric_for_best_model="wer",
... greater_is_better=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=processor,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
トレーニングが完了したら、[`~transformers.Trainer.push_to_hub`] メソッドを使用してモデルをハブに共有し、誰もがモデルを使用できるようにします。
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
自動音声認識用にモデルを微調整する方法のより詳細な例については、英語 ASR についてはこのブログ[投稿](https://huggingface.co/blog/fine-tune-wav2vec2-english)を、多言語 ASR についてはこの[投稿](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)を参照してください。
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
推論を実行したい音声ファイルをロードします。必要に応じて、オーディオ ファイルのサンプリング レートをモデルのサンプリング レートと一致するようにリサンプリングすることを忘れないでください。
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠèªåé³å£°èªèçšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åãããªãŒãã£ãª ãã¡ã€ã«ãããã«æž¡ããŸãã
```py
>>> from transformers import pipeline
>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```
<Tip>
転åã¯ãŸããŸãã§ããããã£ãšè¯ããªãå¯èœæ§ããããŸããããã«è¯ãçµæãåŸãã«ã¯ãããå€ãã®äŸã§ã¢ãã«ã埮調æŽããŠã¿ãŠãã ããã
</Tip>
必要に応じて、`pipeline` の結果を手動で複製することもできます。
<frameworkcontent>
<pt>
ããã»ããµãããŒãããŠãªãŒãã£ãª ãã¡ã€ã«ãšæåèµ·ãããååŠçãã`input`ã PyTorch ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
入力をモデルに渡し、ロジットを返します。
```py
>>> from transformers import AutoModelForCTC
>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ããã `input_ids` ãååŸããããã»ããµã䜿çšããŠäºæž¬ããã `input_ids` ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> import torch
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
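`processor.batch_decode` が行う CTC デコードの中心は、連続する同一トークンを 1 つにまとめ、ブランク トークンを取り除く処理です。説明用の貪欲 (greedy) デコーダの最小スケッチを示します (`blank_id=0` はここでの仮の値です)。

```python
import itertools

def ctc_greedy_decode(token_ids, blank_id=0):
    """連続する同一トークンを 1 つにまとめ、ブランクを除去する (説明用)。"""
    collapsed = [key for key, _ in itertools.groupby(token_ids)]
    return [t for t in collapsed if t != blank_id]

ctc_greedy_decode([0, 7, 7, 0, 0, 3, 3, 3, 0, 7])
# [7, 3, 7]
```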
</pt>
</frameworkcontent>
| 0 |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Causal language modeling
[[open-in-colab]]
言語モデリングには、因果言語モデリングとマスクされた言語モデリングの 2 つのタイプがあります。このガイドでは、因果言語モデリングについて説明します。
因果言語モデルはテキスト生成によく使用されます。これらのモデルは、独自のテキスト アドベンチャーや、Copilot や CodeParrot などのインテリジェントなコーディング アシスタントといった、クリエイティブなアプリケーションに使用できます。
<Youtube id="Vpjb1lu0MDk"/>
因果言語モデリングは、一連のトークン内の次のトークンを予測します。モデルは左側のトークンにのみ注意を向けることができ、これは、モデルが将来のトークンを認識できないことを意味します。GPT-2 は因果的言語モデルの一例です。
ãã®ã¬ã€ãã§ã¯ãæ¬¡ã®æ¹æ³ã説æããŸãã
1. [ELI5](https://huggingface.co/datasets/eli5) データセットの [r/askscience](https://www.reddit.com/r/askscience/) サブセットで [DistilGPT2](https://huggingface.co/distilgpt2) を微調整します。
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¬ã€ããšåãæé ã«åŸã£ãŠãå æèšèªã¢ããªã³ã°çšã«ä»ã®ã¢ãŒããã¯ãã£ã埮調æŽã§ããŸãã
次ã®ã¢ãŒããã¯ãã£ã®ãããããéžæããŸãã
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeLlama](../model_doc/code_llama), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [Falcon](../model_doc/falcon), [Fuyu](../model_doc/fuyu), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MPT](../model_doc/mpt), [MusicGen](../model_doc/musicgen), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [Persimmon](../model_doc/persimmon), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), 
[XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod)
<!--End of the generated tip-->
</Tip>
始める前に、必要なライブラリがすべてインストールされていることを確認してください。
```bash
pip install transformers datasets evaluate
```
モデルをアップロードしてコミュニティと共有できるように、Hugging Face アカウントにログインすることをお勧めします。プロンプトが表示されたら、トークンを入力してログインします。
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load ELI5 dataset
まず、ELI5 データセットの r/askscience サブセットの小さいサブセットを 🤗 Datasets ライブラリからロードします。これにより、完全なデータセットのトレーニングにさらに時間を費やす前に、実験してすべてが機能することを確認する機会が得られます。
```py
>>> from datasets import load_dataset
>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train_asks` ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
'score': [6, 3],
'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
'answers_urls': {'url': []},
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls': {'url': []}}
```
これは多くの情報のように見えるかもしれませんが、実際に関心があるのは `text` フィールドだけです。言語モデリング タスクの優れている点は、次の単語そのものがラベル *になる* ため、別途ラベル (教師なしタスクとも呼ばれます) が必要ないことです。
## Preprocess
<Youtube id="ma1TrR7gE7I"/>
次ã®ã¹ãããã¯ã`text`ãµããã£ãŒã«ããåŠçããããã« DistilGPT2 ããŒã¯ãã€ã¶ãŒãããŒãããããšã§ãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```
上の例からわかるように、`text` フィールドは実際には `answers` 内にネストされています。つまり、[`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) メソッドを使用して、ネストされた構造から `text` サブフィールドを抽出する必要があります。
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
'answers.score': [6, 3],
'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
'answers_urls.url': [],
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls.url': []}
```
`answers` 接頭辞で示されるように、各サブフィールドは個別の列になり、`text` フィールドはリストになりました。各文を個別にトークン化する代わりに、リストを文字列に変換して、まとめてトークン化できるようにします。
以äžã¯ãåäŸã®æååã®ãªã¹ããçµåããçµæãããŒã¯ã³åããæåã®ååŠç颿°ã§ãã
```py
>>> def preprocess_function(examples):
... return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
この前処理関数をデータセット全体に適用するには、🤗 Datasets の [`~datasets.Dataset.map`] メソッドを使用します。`map` 関数を高速化するには、`batched=True` を設定してデータセットの複数の要素を一度に処理し、`num_proc` でプロセスの数を増やします。不要な列を削除します。
```py
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )
```
このデータセットにはトークン シーケンスが含まれていますが、その一部はモデルの最大入力長よりも長くなっています。
2 番目の前処理関数を使用して、
- すべてのシーケンスを連結します
- 連結されたシーケンスを `block_size` で定義された短いチャンクに分割します。`block_size` は、最大入力長より短く、GPU の RAM に収まる長さである必要があります。
```py
>>> block_size = 128
>>> def group_texts(examples):
... # Concatenate all texts.
... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
... total_length = len(concatenated_examples[list(examples.keys())[0]])
... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
... # customize this part to your needs.
... if total_length >= block_size:
... total_length = (total_length // block_size) * block_size
... # Split by chunks of block_size.
... result = {
... k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
... for k, t in concatenated_examples.items()
... }
... result["labels"] = result["input_ids"].copy()
... return result
```
`group_texts` 関数をデータセット全体に適用します。
```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
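`group_texts` の連結と分割のロジックは、玩具データを使って純粋な Python で次のように確認できます (`examples` の値は説明用の仮のものです)。

```python
# 説明用の仮のバッチ: 3 つの短いシーケンス
examples = {"input_ids": [[1, 2, 3, 4], [5, 6, 7], [8, 9, 10]]}
block_size = 4

# すべてのシーケンスを 1 本に連結
concatenated = sum(examples["input_ids"], [])
# block_size の倍数に切り捨て (端数のトークンは捨てられる)
total_length = (len(concatenated) // block_size) * block_size  # 8
# block_size ごとのチャンクに分割
chunks = [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]
# chunks == [[1, 2, 3, 4], [5, 6, 7, 8]] -- [9, 10] は捨てられる
labels = [chunk.copy() for chunk in chunks]
```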
次に、[`DataCollatorForLanguageModeling`] を使用してサンプルのバッチを作成します。データセット全体を最大長までパディングするのではなく、照合時にバッチ内の最長の文の長さに合わせて *動的にパディング* する方が効率的です。
<frameworkcontent>
<pt>
シーケンス終了トークンをパディング トークンとして使用し、`mlm=False` を設定します。これにより、入力を 1 要素分右にシフトしたものがラベルとして使用されます。
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
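`mlm=False` のときにラベルがどのように使われるかを、純粋な Python でスケッチすると次のようになります (実際のシフトはモデルの損失計算の内部で行われます)。

```python
input_ids = [10, 11, 12, 13]
labels = input_ids.copy()  # コレータは labels = input_ids のコピーを作るだけ

# モデル内部で行われるシフトのイメージ (説明用):
# 位置 i の入力が位置 i+1 のトークンを予測するようにペアが組まれる
pairs = list(zip(input_ids[:-1], labels[1:]))
# pairs == [(10, 11), (11, 12), (12, 13)]
```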
</pt>
<tf>
シーケンス終了トークンをパディング トークンとして使用し、`mlm=False` を設定します。これにより、入力を 1 要素分右にシフトしたものがラベルとして使用されます。
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] を使用したモデルの微調整に慣れていない場合は、[基本チュートリアル](../training#train-with-pytorch-trainer)を参照してください。
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForCausalLM`] ã䜿çšã㊠DistilGPT2 ãããŒãããŸãã
```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] でトレーニング ハイパーパラメータを定義します。唯一の必須パラメータは、モデルの保存場所を指定する `output_dir` です。`push_to_hub=True` を設定して、このモデルをハブにプッシュします (モデルをアップロードするには、Hugging Face にサインインする必要があります)。
2. トレーニング引数を、モデル、データセット、データ照合器とともに [`Trainer`] に渡します。
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_clm-model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=lm_dataset["train"],
... eval_dataset=lm_dataset["test"],
... data_collator=data_collator,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.evaluate`] ã¡ãœããã䜿çšããŠã¢ãã«ãè©äŸ¡ãããã®è€éããååŸããŸãã
```py
>>> import math
>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```
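パープレキシティは、平均クロスエントロピー損失の指数です。説明用の仮のトークンごとの確率から計算するスケッチを示します。

```python
import math

# モデルが各正解トークンに割り当てた確率 (説明用の仮の値)
token_probs = [0.25, 0.1, 0.5, 0.05]
# 平均クロスエントロピー損失 (`eval_loss` に相当する量)
loss = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(loss)  # 1600 ** 0.25 ≈ 6.32
```

確率がすべて 1.0 (完全な予測) ならパープレキシティは 1 になり、値が小さいほどモデルの予測が正確であることを意味します。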
次に、 [`~transformers.Trainer.push_to_hub`] メソッドを使用してモデルをハブに共有し、誰もがモデルを使用できるようにします。
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
Keras を使用したモデルの微調整に慣れていない場合は、[基本チュートリアル](../training#train-a-tensorflow-model-with-keras)をご覧ください。
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ããªããã£ãã€ã¶ãŒé¢æ°ãåŠç¿çã¹ã±ãžã¥ãŒã«ãããã³ããã€ãã®ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãŒãã»ããã¢ããããããšããå§ããŸãã
```py
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
次ã«ã[`TFAutoModelForCausalLM`] ã䜿çšã㊠DistilGPT2 ãããŒãã§ããŸãã
```py
>>> from transformers import TFAutoModelForCausalLM
>>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
```
[`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> tf_train_set = model.prepare_tf_dataset(
... lm_dataset["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... lm_dataset["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) を使用してトレーニング用のモデルを設定します。Transformers モデルにはすべてデフォルトのタスク関連の損失関数があるため、必要な場合を除き、損失関数を指定する必要はないことに注意してください。
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
トレーニングを開始する前に、モデルをハブにプッシュする方法を設定します。これは、モデルとトークナイザーをプッシュする場所を [`~transformers.PushToHubCallback`] で指定することで行えます。
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_eli5_clm-model",
... tokenizer=tokenizer,
... )
```
ã€ãã«ãã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠[`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```
ãã¬ãŒãã³ã°ãå®äºãããšãã¢ãã«ã¯èªåçã«ããã«ã¢ããããŒãããã誰ã§ã䜿çšã§ããããã«ãªããŸãã
</tf>
</frameworkcontent>
<Tip>
因果言語モデリング用にモデルを微調整する方法のより詳細な例については、対応する [PyTorch ノートブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)または [TensorFlow ノートブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)を参照してください。
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
ããã¹ããçæããããã³ãããèãåºããŸãã
```py
>>> prompt = "Somatic hypermutation allows the immune system to"
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ã詊ãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠããã¹ãçæçšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«ããã¹ããæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
>>> generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
```
<frameworkcontent>
<pt>
テキストをトークン化し、`input_ids` を PyTorch テンソルとして返します。
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="pt").input_ids
```
[`~transformers.generation_utils.GenerationMixin.generate`] メソッドを使用してテキストを生成します。さまざまなテキスト生成戦略と、生成を制御するためのパラメーターの詳細については、[テキスト生成戦略](../generation_strategies)ページを参照してください。
```py
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```
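`do_sample=True, top_k=50, top_p=0.95` が行う候補の絞り込みの考え方を、純粋な Python でスケッチすると次のようになります (説明用の簡略版で、実際のフィルタリングは `generate` の内部で行われます)。

```python
def top_k_top_p_filter(probs, top_k=50, top_p=0.95):
    """top-k と top-p (nucleus) で次トークン候補を絞り、再正規化する (説明用)。"""
    # 確率の高い順に並べ、上位 top_k 件だけ残す
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)[:top_k]
    # 累積確率が top_p に達したところで打ち切る (最低 1 件は残す)
    kept, cumulative = [], 0.0
    for token_id, p in ranked:
        kept.append((token_id, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # 残った候補で再正規化し、この分布からサンプリングする
    total = sum(p for _, p in kept)
    return {token_id: p / total for token_id, p in kept}

top_k_top_p_filter([0.5, 0.3, 0.15, 0.05], top_k=3, top_p=0.8)
# {0: 0.625, 1: 0.375} -- 累積確率 0.8 に達した時点で残り 2 候補は除外される
```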
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
```
</pt>
<tf>
ããã¹ããããŒã¯ã³åãã`input_ids`ã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="tf").input_ids
```
[`~transformers.generation_tf_utils.TFGenerationMixin.generate`] メソッドを使用してテキストを生成します。さまざまなテキスト生成戦略と、生成を制御するためのパラメーターの詳細については、[テキスト生成戦略](../generation_strategies)ページを参照してください。
```py
>>> from transformers import TFAutoModelForCausalLM
>>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
```
</tf>
</frameworkcontent>
| 0 |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# LLM prompting guide
[[open-in-colab]]
FalconãLLaMA ãªã©ã®å€§èŠæš¡èšèªã¢ãã«ã¯ãäºåã«ãã¬ãŒãã³ã°ããããã©ã³ã¹ãã©ãŒã㌠ã¢ãã«ã§ãããæåã¯äºæž¬ããããã«ãã¬ãŒãã³ã°ãããŠããŸãã
å¥åããã¹ããäžããããå Žåã®æ¬¡ã®ããŒã¯ã³ãéåžžãæ°ååã®ãã©ã¡ãŒã¿ããããäœåãã®ãã©ã¡ãŒã¿ã§ãã¬ãŒãã³ã°ãããŠããŸãã
é·æéã®ããŒã¯ã³ããã®çµæããããã®ã¢ãã«ã¯éåžžã«åŒ·åã§å€çšéã«ãªããæ¬¡ã®ãããªããšãå¯èœã«ãªããŸãã
èªç¶èšèªããã³ããã§ã¢ãã«ã«æç€ºããããšã§ãããã«è€æ°ã® NLP ã¿ã¹ã¯ã解決ã§ããŸãã
æé©ãªåºåãä¿èšŒããããã«ãã®ãããªããã³ãããèšèšããããšã¯ãå€ãã®å Žåãããã³ãã ãšã³ãžãã¢ãªã³ã°ããšåŒã°ããŸããããã³ãããšã³ãžãã¢ãªã³ã°ãšã¯ã
ããªãã®éã®å®éšãå¿èŠãšããå埩ããã»ã¹ãèªç¶èšèªã¯ã¯ããã«æè»ã§è¡šçŸåè±ãã§ã
ãã ããããã°ã©ãã³ã°èšèªããããããŸãããçããå¯èœæ§ããããŸããåæã«ãèªç¶èšèªã«ããããã³ããå€åã«ã¯ããªãææã§ããããã³ããã«ããããªå€æŽãå ããã ãã§ããåºåã倧å¹ã«ç°ãªãå ŽåããããŸãã
ãã¹ãŠã®ã±ãŒã¹ã«é©åããããã³ãããäœæããããã®æ£ç¢ºãªã¬ã·ãã¯ãããŸããããç ç©¶èã¯ãæé©ãªçµæãããäžè²«ããŠéæããã®ã«åœ¹ç«ã€ããã€ãã®æè¯ã®ã¬ã·ããèæ¡ããŸããã
ãã®ã¬ã€ãã§ã¯ãããåªãã LLM ããã³ãããäœæããããŸããŸãª NLP ã¿ã¹ã¯ã解決ããã®ã«åœ¹ç«ã€ããã³ãã ãšã³ãžãã¢ãªã³ã°ã®ãã¹ã ãã©ã¯ãã£ã¹ã«ã€ããŠèª¬æããŸãã
次ã®ããšãåŠã³ãŸã:
- [ããã³ããã®åºæ¬](#basic-prompts)
- [LLM ããã³ããã®ãã¹ã ãã©ã¯ãã£ã¹](#best-practices-of-llm-prompting)
- [é«åºŠãªããã³ãã ãã¯ããã¯: æ°åã®ããã³ãããšæèã®é£é](#advanced-prompting-techniques)
- [ããã³ããã衚瀺ãã代ããã«åŸ®èª¿æŽããå Žå](#prompting-vs-fine-tuning)
<Tip>
è¿éãªãšã³ãžãã¢ãªã³ã°ã¯ãLLM åºåæé©åããã»ã¹ã®äžéšã«ãããŸããããã 1 ã€ã®éèŠãªèŠçŽ ã¯ã
æé©ãªããã¹ãçææŠç¥ã LLM ãçææã«åŸç¶ã®åããŒã¯ã³ãéžæããæ¹æ³ãã«ã¹ã¿ãã€ãºã§ããŸãã
ãã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãäžå倿Žããã«ããã¹ããäœæããŸããããã¹ãçæãã©ã¡ãŒã¿ã埮調æŽããããšã§ã
çæãããããã¹ãã«ç¹°ãè¿ããå«ãŸããŠãããããããäžè²«æ§ããã人éãããé¿ãã«ãªããŸãã
ããã¹ãçææŠç¥ãšãã©ã¡ãŒã¿ãŒã¯ãã®ã¬ã€ãã®ç¯å²å€ã§ããããããã®ãããã¯ã«ã€ããŠè©³ããã¯ã次ã®ãããã¯ãåç§ããŠãã ããã
次ã®ã¬ã€ã:
* [LLM ã«ããçæ](../llm_tutorial)
* [ããã¹ãçææŠç¥](../generation_strategies)
</Tip>
## Basics of prompting
### Types of models
ææ°ã® LLM ã®å€§éšåã¯ããã³ãŒãå°çšã®ãã©ã³ã¹ãã©ãŒããŒã§ããäŸãšããŠã¯ã[LLaMA](../model_doc/llama)ã
[Llama2](../model_doc/llama2)ã[Falcon](../model_doc/falcon)ã[GPT2](../model_doc/gpt2) ãªã©ããããŸãããã ãããšã³ã³ãŒã ãã³ãŒã ãã©ã³ã¹ãã©ãŒã LLM ã«åºäŒãå¯èœæ§ããããŸããããšãã°ã[Flan-T5](../model_doc/flan-t5) ã [BART](../model_doc/bart) ã§ãã
ãšã³ã³ãŒã ãã³ãŒã ã¹ã¿ã€ã«ã®ã¢ãã«ã¯éåžžãåºåãå¥åã«**倧ãã**äŸåããçæã¿ã¹ã¯ã§äœ¿çšãããŸãã
ããšãã°ã翻蚳ãšèŠçŽã§ãããã³ãŒãå°çšã¢ãã«ã¯ãä»ã®ãã¹ãŠã®ã¿ã€ãã®çæã¿ã¹ã¯ã«äœ¿çšãããŸãã
ãã€ãã©ã€ã³ã䜿çšã㊠LLM ã§ããã¹ããçæããå Žåã䜿çšããŠãã LLM ã®ã¿ã€ããç¥ãããšãéèŠã§ãã
ç°ãªããã€ãã©ã€ã³ã䜿çšããŸãã
`text-generation`ãã€ãã©ã€ã³ã䜿çšããŠãã³ãŒãã®ã¿ã®ã¢ãã«ã§æšè«ãå®è¡ããŸãã
```python
>>> from transformers import pipeline
>>> import torch
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> generator = pipeline('text-generation', model = 'gpt2')
>>> prompt = "Hello, I'm a language model"
>>> generator(prompt, max_length = 30)
[{'generated_text': "Hello, I'm a language model expert, so I'm a big believer in the concept that I know very well and then I try to look into"}]
```
ãšã³ã³ãŒããŒ/ãã³ãŒããŒã䜿çšããŠæšè«ãå®è¡ããã«ã¯ã`text2text-generation` ãã€ãã©ã€ã³ã䜿çšããŸãã
```python
>>> text2text_generator = pipeline("text2text-generation", model = 'google/flan-t5-base')
>>> prompt = "Translate from English to French: I'm very happy to see you"
>>> text2text_generator(prompt)
[{'generated_text': 'Je suis trÚs heureuse de vous rencontrer.'}]
```
### Base vs instruct/chat models
ð€ Hub ã§å©çšã§ããæè¿ã® LLM ãã§ãã¯ãã€ã³ãã®ã»ãšãã©ã«ã¯ãbase ãš instruct (ãŸã㯠chat) ã® 2 ã€ã®ããŒãžã§ã³ããããŸããäŸãã°ã
[`tiiuae/falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) ããã³ [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct) ã§ãã
åºæ¬ã¢ãã«ã¯ãæåã®ããã³ãããäžãããããšãã«ããã¹ãã宿ãããã®ã«ã¯åªããŠããŸãããNLP ã¿ã¹ã¯ã«ã¯çæ³çã§ã¯ãããŸããã
æç€ºã«åŸãå¿èŠãããå ŽåããŸãã¯äŒè©±ã§äœ¿çšããå Žåã«äœ¿çšããŸããããã§ãæç€º (ãã£ãã) ããŒãžã§ã³ãç»å ŽããŸãã
ãããã®ãã§ãã¯ãã€ã³ãã¯ãåœä»€ãšäŒè©±ããŒã¿ã«åºã¥ããŠäºåãã¬ãŒãã³ã°ãããããŒã¹ ããŒãžã§ã³ãããã«åŸ®èª¿æŽããçµæã§ãã
ãã®è¿œå ã®åŸ®èª¿æŽã«ãããå€ãã® NLP ã¿ã¹ã¯ã«ãšã£ãŠããé©åãªéžæè¢ã«ãªããŸãã
[`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct) ã§äœ¿çšã§ããããã€ãã®ç°¡åãªããã³ããã瀺ããŠã¿ãŸãããã
ããã€ãã®äžè¬ç㪠NLP ã¿ã¹ã¯ã解決ããŸãã
### NLP tasks
ãŸããç°å¢ãã»ããã¢ããããŸãããã
```bash
pip install -q transformers accelerate
```
次ã«ãé©åãªãã€ãã©ã€ã³ (`text_generation`) ã䜿çšããŠã¢ãã«ãããŒãããŸãããã
```python
>>> from transformers import pipeline, AutoTokenizer
>>> import torch
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> model = "tiiuae/falcon-7b-instruct"
>>> tokenizer = AutoTokenizer.from_pretrained(model)
>>> pipe = pipeline(
... "text-generation",
... model=model,
... tokenizer=tokenizer,
... torch_dtype=torch.bfloat16,
... device_map="auto",
... )
```
<Tip>
Falcon ã¢ãã«ã¯ `bfloat16` ããŒã¿åã䜿çšããŠãã¬ãŒãã³ã°ããããããåããã®ã䜿çšããããšããå§ãããŸããããã«ã¯ãæè¿ã®
CUDA ã®ããŒãžã§ã³ã«æºæ ããŠãããææ°ã®ã«ãŒãã§æé©ã«åäœããŸãã
</Tip>
ãã€ãã©ã€ã³çµç±ã§ã¢ãã«ãããŒãããã®ã§ãããã³ããã䜿çšã㊠NLP ã¿ã¹ã¯ã解決ããæ¹æ³ãèŠãŠã¿ãŸãããã
#### Text classification
ããã¹ãåé¡ã®æãäžè¬çãªåœ¢åŒã® 1 ã€ã¯ã»ã³ãã¡ã³ãåæã§ããããããžãã£ãããããã¬ãã£ãããããã¬ãã£ãããªã©ã®ã©ãã«ãå²ãåœãŠãŸãã
ãŸãã¯ãäžé£ã®ããã¹ãã«å¯ŸããŠãäžç«ãã§ããäžããããããã¹ã (æ ç»ã¬ãã¥ãŒ) ãåé¡ããããã«ã¢ãã«ã«æç€ºããããã³ãããäœæããŠã¿ãŸãããã
ãŸãæç€ºãäžããæ¬¡ã«åé¡ããããã¹ããæå®ããŸãããã®ãŸãŸã«ããŠããã®ã§ã¯ãªãã
å¿çã®åé ã«ã远å ããŸã - `"Sentiment: "`:
```python
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> prompt = """Classify the text into neutral, negative or positive.
... Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
... Sentiment:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Classify the text into neutral, negative or positive.
Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
Sentiment:
Positive
```
ãã®çµæãåºåã«ã¯ãæé ã§æäŸãããªã¹ãã®åé¡ã©ãã«ãå«ãŸããŠãããããã¯æ£ããã©ãã«ã§ãã
<Tip>
ããã³ããã«å ããŠã`max_new_tokens`ãã©ã¡ãŒã¿ãæž¡ããŠããããšã«æ°ã¥ããããããŸãããããŒã¯ã³ã®æ°ãå¶åŸ¡ããŸãã
ã¢ãã«ãçæããŸããããã¯ãåŠç¿ã§ããå€ãã®ããã¹ãçæãã©ã¡ãŒã¿ãŒã® 1 ã€ã§ãã
[ããã¹ãçææŠç¥](../generation_strategies) ã¬ã€ããåç§ããŠãã ããã
</Tip>
#### Named Entity Recognition
åºæè¡šçŸèªè (NER) ã¯ãããã¹ãåã®äººç©ãå Žæãçµç¹ãªã©ã®åºæè¡šçŸãæ€çŽ¢ããã¿ã¹ã¯ã§ãã
ããã³ããã®æç€ºã倿ŽããŠãLLM ã«ãã®ã¿ã¹ã¯ãå®è¡ãããŸããããããã§ã¯`return_full_text = False`ãèšå®ããŸããã
åºåã«ããã³ãããå«ãŸããªãããã«ããŸãã
```python
>>> torch.manual_seed(1) # doctest: +IGNORE_RESULT
>>> prompt = """Return a list of named entities in the text.
... Text: The Golden State Warriors are an American professional basketball team based in San Francisco.
... Named entities:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=15,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
- Golden State Warriors
- San Francisco
```
ã芧ã®ãšãããã¢ãã«ã¯æå®ãããããã¹ããã 2 ã€ã®ååä»ããšã³ãã£ãã£ãæ£ããèå¥ããŸããã
#### Translation
LLM ãå®è¡ã§ãããã 1 ã€ã®ã¿ã¹ã¯ã¯ç¿»èš³ã§ãããã®ã¿ã¹ã¯ã«ã¯ãšã³ã³ãŒããŒ/ãã³ãŒã㌠ã¢ãã«ã䜿çšããããšãéžæã§ããŸãããããã§ã¯
äŸãç°¡åã«ããããã«ããã¡ããšããä»äºããã Falcon-7b-instruct ã䜿ãç¶ããŸããããäžåºŠãæ¹æ³ã¯æ¬¡ã®ãšããã§ã
ããã¹ãã®äžéšãè±èªããã€ã¿ãªã¢èªã«ç¿»èš³ããããã«ã¢ãã«ã«æç€ºããåºæ¬çãªããã³ãããäœæã§ããŸãã
```python
>>> torch.manual_seed(2) # doctest: +IGNORE_RESULT
>>> prompt = """Translate the English text to Italian.
... Text: Sometimes, I've believed as many as six impossible things before breakfast.
... Translation:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=20,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
A volte, ho creduto a sei impossibili cose prima di colazione.
```
ããã§ã¯ãåºåçææã«ã¢ãã«ãããå°ãæè»ã«ãªãããã« `do_sample=True` ãš `top_k=10` ã远å ããŸããã
#### Text summarization
翻蚳ãšåæ§ã«ãããã¹ãã®èŠçŽããåºåãå¥åã«**倧ãã**äŸåããçæã¿ã¹ã¯ã§ãã
ãšã³ã³ãŒã/ãã³ãŒã ã¢ãã«ã®æ¹ãè¯ãéžæã«ãªãå¯èœæ§ããããŸãããã ãããã³ãŒã ã¹ã¿ã€ã«ã®ã¢ãã«ããã®ã¿ã¹ã¯ã«äœ¿çšã§ããŸãã
以åã¯ãããã³ããã®åé ã«æç€ºãé眮ããŠããŸããããã ããããã³ããã®æåŸã§ãæç€ºãäžããã®ã«é©ããå Žæã§ããããŸããéåžžãåœä»€ã¯ã©ã¡ããã®ç«¯ã«é眮ããããšããå§ãããŸãã
```python
>>> torch.manual_seed(3) # doctest: +IGNORE_RESULT
>>> prompt = """Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional ecological knowledge of indigenous cultures combined with modern scientific understanding and technological innovations. Permaculture design provides a framework helping individuals and communities develop innovative, creative and effective strategies for meeting basic needs while preparing for and mitigating the projected impacts of climate change.
... Write a summary of the above text.
... Summary:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=30,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
Permaculture is an ecological design mimicking natural ecosystems to meet basic needs and prepare for climate change. It is based on traditional knowledge and scientific understanding.
```
#### Question answering
質åå¿çã¿ã¹ã¯ã®å Žåãããã³ãããæ¬¡ã®è«çã³ã³ããŒãã³ãã«æ§é åã§ããŸã: æç€ºãã³ã³ããã¹ãã質åã
åé ã®åèªãŸãã¯ãã¬ãŒãº (`"Answer:"`) ã䜿çšããŠãã¢ãã«ãæäœããŠçãã®çæãéå§ããŸãã
```python
>>> torch.manual_seed(4) # doctest: +IGNORE_RESULT
>>> prompt = """Answer the question using the context below.
... Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. Most gazpacho includes stale bread, tomato, cucumbers, onion, bell peppers, garlic, olive oil, wine vinegar, water, and salt. Northern recipes often include cumin and/or pimentón (smoked sweet paprika). Traditionally, gazpacho was made by pounding the vegetables in a mortar with a pestle; this more laborious method is still sometimes used as it helps keep the gazpacho cool and avoids the foam and silky consistency of smoothie versions made in blenders or food processors.
... Question: What modern tool is used to make gazpacho?
... Answer:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Modern tools are used, such as immersion blenders
```
#### Reasoning
LLM ã«ãšã£ãŠæšè«ã¯æãå°é£ãªã¿ã¹ã¯ã® 1 ã€ã§ãããè¯ãçµæãéæããã«ã¯ãå€ãã®å Žåãæ¬¡ã®ãããªé«åºŠãªããã³ãã ãã¯ããã¯ãé©çšããå¿èŠããããŸã: [Chain-of-thought](#chain-of-thought)ã
åºæ¬çãªããã³ããã䜿çšããŠãåçŽãªç®è¡ã¿ã¹ã¯ã«é¢ããã¢ãã«æšè«ãäœæã§ãããã©ãã詊ããŠã¿ãŸãããã
```python
>>> torch.manual_seed(5) # doctest: +IGNORE_RESULT
>>> prompt = """There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class?"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=30,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result:
There are a total of 5 groups, so there are 5 x 4=20 students in the class.
```
æ£ããïŒããå°ãè€éããå¢ãããŠãåºæ¬çãªããã³ããã§åé¡ã解決ã§ãããã©ããã確èªããŠã¿ãŸãããã
```python
>>> torch.manual_seed(6) # doctest: +IGNORE_RESULT
>>> prompt = """I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result:
The total number of muffins now is 21
```
ããã¯ééã£ãçãã§ãã12 ã§ããå¿èŠããããŸãããã®å Žåãããã³ãããåºæ¬çãããããéžæå容ãåå ã§ããå¯èœæ§ããããŸãã
çµå±ã®ãšãããFalcon ã®æå°ããŒãžã§ã³ãéžæããŸããããããããµã€ãºã®ã¢ãã«ã§ã¯æšè«ãå°é£ã§ããããã倧ããªã¢ãã«ã§ã¯
ã¢ãã«ã®ããã©ãŒãã³ã¹ãåäžããå¯èœæ§ããããŸãã
## Best practices of LLM prompting
ã¬ã€ãã®ãã®ã»ã¯ã·ã§ã³ã§ã¯ãããã³ããã®çµæãæ¹åããåŸåã«ãããã¹ã ãã©ã¯ãã£ã¹ã®ãªã¹ãããŸãšããŸããã
* 䜿çšããã¢ãã«ãéžæããå Žåã¯ãææ°ãã€æãæ©èœçãªã¢ãã«ã®æ¹ãããã©ãŒãã³ã¹ãåäžããå¯èœæ§ããããŸãã
* ã·ã³ãã«ã§çãããã³ããããå§ããŠãããããç¹°ãè¿ããŸãã
* æç€ºã¯ããã³ããã®æåãŸãã¯æåŸã«å¥åããŠãã ãããå€§èŠæš¡ãªã³ã³ããã¹ããæ±ãå Žåãã¢ãã«ã¯ããŸããŸãªæé©åãé©çšããŠãã¢ãã³ã·ã§ã³ã®è€éããäºæ¬¡çã«æ¡å€§ããã®ãé²ããŸããããã«ãããã¢ãã«ã¯ããã³ããã®éäžãããæåãŸãã¯æåŸã«æ³šæãæãããã«ãªããŸãã
* æç€ºãšããããé©çšãããããã¹ããæç¢ºã«åºå¥ããŠãã ãããããã«ã€ããŠã¯ã次ã®ã»ã¯ã·ã§ã³ã§è©³ãã説æããŸãã
* ã¿ã¹ã¯ãšæãŸããçµæ (ãã®åœ¢åŒãé·ããã¹ã¿ã€ã«ãèšèªãªã©) ã«ã€ããŠå·äœçãã€èª¬æçã«ããŸãã
* ææ§ãªèª¬æãæç€ºã¯é¿ããŠãã ããã
* ãäœãããŠã¯ãããªãããšããæç€ºã§ã¯ãªãããäœããã¹ããããšããæç€ºãåªåããŸãã
* æåã®åèªãæžã㊠(ãŸãã¯ã¢ãã«ã®æåã®æãå§ããŠ)ãåºåãæ£ããæ¹åã«ãå°ãããŸãã
* [Few-shot prompting](#few-shot-prompting) ã [Chain-of-thought](#chain-of-thought) ãªã©ã®é«åºŠãªãã¯ããã¯ã䜿çšããŸãã
* ããŸããŸãªã¢ãã«ã§ããã³ããããã¹ãããŠããã®åç¢æ§ãè©äŸ¡ããŸãã
* ããã³ããã®ããŒãžã§ã³ã確èªããããã©ãŒãã³ã¹ã远跡ããŸãã
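äžèšã®ãã¹ã ãã©ã¯ãã£ã¹ã®ãã¡ã「指瀺ず察象テキストを明確に区別する」「出力の曞き出しで正しい方向に導く」ずいう 2 点は、モデルに䟝存しない玔粋な文字列操䜜ずしおスケッチできたす。以䞋の `build_prompt` はこのガむドの説明甚に仮に眮いたヘルパヌであり、特定のラむブラリの API ではありたせん。

```python
def build_prompt(instruction: str, text: str, output_prefix: str = "") -> str:
    """指瀺ず察象テキストを明確に区切った単玔なプロンプトを組み立おる、
    説明甚の仮眮きヘルパヌ。"""
    # 指瀺はプロンプトの先頭に眮き、察象テキストは "Text:" で明瀺的に区切る
    prompt = f"{instruction}\nText: {text}\n"
    # 出力の曞き出し (䟋: "Sentiment:") を䞎えるず、モデルを正しい方向に導きやすくなる
    if output_prefix:
        prompt += output_prefix
    return prompt


prompt = build_prompt(
    instruction="Classify the text into neutral, negative or positive.",
    text="This movie is definitely one of my favorite movies of its kind.",
    output_prefix="Sentiment:",
)
print(prompt)
```

このように組み立おたプロンプト文字列は、前のセクションで䜜成した `pipe` にそのたた枡せたす。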
## Advanced prompting techniques
### Few-shot prompting
äžèšã®ã»ã¯ã·ã§ã³ã®åºæ¬çãªããã³ããã¯ãããŒãã·ã§ãããããã³ããã®äŸã§ããã€ãŸããã¢ãã«ã«ã¯ãã§ã«äžããããŠããŸãã
æç€ºãšã³ã³ããã¹ãã¯ãããŸããã解決çãå«ãäŸã¯ãããŸãããéåžžãåœä»€ããŒã¿ã»ããã«åºã¥ããŠåŸ®èª¿æŽããã LLM
ãã®ãããªããŒãã·ã§ãããã¿ã¹ã¯ã§ãåªããããã©ãŒãã³ã¹ãçºæ®ããŸãããã ããã¿ã¹ã¯ãããè€éã§ãã£ãã埮åŠãªç¹ããã£ããããå ŽåããããŸãã
åºåã«ã¯ãåœä»€ã ãã§ã¯ã¢ãã«ãçè§£ã§ããªãããã€ãã®èŠä»¶ããããŸãããã®å Žåãæ¬¡ã®ããšãã§ããŸãã
å°æ°ã·ã§ãã ããã³ãããšåŒã°ãããã¯ããã¯ã詊ããŠãã ããã
å°æ°ã·ã§ãã ããã³ããã§ã¯ãã¢ãã«ã«ããã©ãŒãã³ã¹ãåäžãããããã®ããå€ãã®ã³ã³ããã¹ããæäŸããããã³ããåã®äŸãæäŸãããŸãã
äŸã§ã¯ãäŸã®ãã¿ãŒã³ã«åŸã£ãŠåºåãçæããããã«ã¢ãã«ãæ¡ä»¶ä»ãããŸãã
以äžã«äŸã瀺ããŸãã
```python
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> prompt = """Text: The first human went into space and orbited the Earth on April 12, 1961.
... Date: 04/12/1961
... Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
... Date:"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=8,
... do_sample=True,
... top_k=10,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
Date: 09/28/1960
```
äžèšã®ã³ãŒã ã¹ããããã§ã¯ãã¢ãã«ãžã®ç®çã®åºåã瀺ãããã« 1 ã€ã®äŸã䜿çšããŸããããããã£ãŠãããã¯ã
ãã¯ã³ã·ã§ãããããã³ããããã ããã¿ã¹ã¯ã®è€éãã«å¿ããŠãè€æ°ã®äŸã䜿çšããå¿èŠãããå ŽåããããŸãã
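䟋の数を増やす堎合も、few-shot プロンプトの組み立お自䜓は単玔な文字列連結です。以䞋は (テキスト, 回答) ペアのリストから䞊蚘ず同じ圢匏のプロンプトを組み立おる最小のスケッチで、`build_few_shot_prompt` は説明甚の仮のヘルパヌ名です。

```python
def build_few_shot_prompt(examples, query):
    """(text, answer) ペアの䟋ず新しい入力から few-shot プロンプトを組み立おる、
    説明甚の仮眮きスケッチ。"""
    parts = []
    for text, answer in examples:
        # 各䟋は「入力 → 期埅する出力」のパタヌンをそのたた瀺す
        parts.append(f"Text: {text}\nDate: {answer}")
    # 最埌に新しい入力を眮き、"Date:" で止めおモデルに続きを生成させる
    parts.append(f"Text: {query}\nDate:")
    return "\n".join(parts)


prompt = build_few_shot_prompt(
    [("The first human went into space and orbited the Earth on April 12, 1961.", "04/12/1961")],
    "The first-ever televised presidential debate took place on September 28, 1960.",
)
print(prompt)
```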
æ°åã®ããã³ããææ³ã®å¶é:
- LLM ã¯äŸã®ãã¿ãŒã³ãçè§£ã§ããŸããããããã®ææ³ã¯è€éãªæšè«ã¿ã¹ã¯ã§ã¯ããŸãæ©èœããŸããã
- å°æ°ã·ã§ããã®ããã³ããã§ã¯ãé·ãããã³ãããäœæããå¿èŠããããŸãã倧éã®ããŒã¯ã³ãå«ãããã³ããã§ã¯ãèšç®éãšåŸã¡æéãå¢å ããå¯èœæ§ããããŸããããã³ããã®é·ãã«ãå¶éããããŸãã
- å€ãã®äŸãäžãããšãã¢ãã«ãåŠç¿ããã€ããã®ãªãã£ããã¿ãŒã³ãåŠç¿ããããšããããŸãã 3çªç®ã®æ ç»ã¬ãã¥ãŒã¯ãã€ãåŠå®çã ãšããããšã
### Chain-of-thought
æèé£é (CoT) ããã³ããã¯ãã¢ãã«ã埮調æŽããŠäžéæšè«ã¹ããããçæããæ¹åããææ³ã§ãã
è€éãªæšè«ã¿ã¹ã¯ã®çµæã
ã¢ãã«ãæäœããŠæšè«ã¹ããããçæããã«ã¯ã2 ã€ã®æ¹æ³ããããŸãã
- 質åã«å¯Ÿãã詳现ãªåçãå«ãäŸã瀺ããåé¡ã«å¯ŸåŠããæ¹æ³ãã¢ãã«ã«ç€ºãããšã§ãæ°åã®ããã³ããã衚瀺ããŸãã
- ãã¹ãããããšã«èããŠã¿ãŸãããããŸãã¯ãæ·±åŒåžããŠãåé¡ãã¹ãããããšã«è§£æ±ºããŠãã ããããªã©ã®ãã¬ãŒãºã远å ããŠã¢ãã«ã«æšè«ãæç€ºããŸãã
[æšè«ã»ã¯ã·ã§ã³](#reasoning) ã®ããã£ã³ã®äŸã« CoT ãã¯ããã¯ãé©çšãããã倧ããªã¢ãã«ã䜿çšãããšã
[HuggingChat](https://huggingface.co/chat/)ã§éã¹ã(`tiiuae/falcon-180B-chat`)ãªã©ã
æšè«çµæã¯å€§å¹ã«æ¹åãããŸãã
```text
Let's go through this step-by-step:
1. You start with 15 muffins.
2. You eat 2 muffins, leaving you with 13 muffins.
3. You give 5 muffins to your neighbor, leaving you with 8 muffins.
4. Your partner buys 6 more muffins, bringing the total number of muffins to 14.
5. Your partner eats 2 muffins, leaving you with 12 muffins.
Therefore, you now have 12 muffins.
```
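なお、この䟋の正しい答えは、モデルの掚論ステップをそのたた単玔な算術ずしお蟿るこずで怜算できたすモデルずは無関係の確認甚スケッチです。

```python
# 掚論ステップをそのたた蟿る怜算 (Chain-of-thought の各ステップに察応)
muffins = 15
muffins -= 2   # 自分が 2 個食べる → 13
muffins -= 5   # 隣人に 5 個あげる → 8
muffins += 6   # パヌトナヌが 6 個買う → 14
muffins -= 2   # パヌトナヌが 2 個食べる → 12
print(muffins)  # 12
```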
## Prompting vs fine-tuning
ããã³ãããæé©åããããšã§åªããçµæãéæã§ããŸãããã¢ãã«ã埮調æŽãããã©ããã«ã€ããŠã¯ãŸã ææ¡ãããããããŸããã
ããªãã®å Žåã«ã¯ãã£ãšããŸãããã§ããããããå°èŠæš¡ãªã¢ãã«ã埮調æŽããããšã奜ãŸãããªãã·ã§ã³ã§ããå Žåã®ããã€ãã®ã·ããªãªã次ã«ç€ºããŸãã
- ãã¡ã€ã³ã LLM ãäºåã«ãã¬ãŒãã³ã°ããããã®ãšå€§ããç°ãªã£ãŠãããåºç¯ãªããã³ããæé©åã§ã¯ååãªçµæãåŸãããŸããã§ããã
- ã¢ãã«ãäœãªãœãŒã¹èšèªã§é©åã«åäœããå¿èŠããããŸãã
- 峿 ŒãªèŠå¶ã®äžã«ããæ©å¯ããŒã¿ã§ã¢ãã«ããã¬ãŒãã³ã°ããå¿èŠããããŸãã
- ã³ã¹ãããã©ã€ãã·ãŒãã€ã³ãã©ã¹ãã©ã¯ãã£ããŸãã¯ãã®ä»ã®å¶éã«ãããå°èŠæš¡ãªã¢ãã«ã䜿çšããå¿èŠããããŸãã
äžèšã®ãã¹ãŠã®äŸã§ãååãªå€§ããã®ãã¡ã€ã«ããã§ã«æã£ãŠããããç°¡åã«å¥æã§ãããã確èªããå¿èŠããããŸãããã¡ã€ã³åºæã®ããŒã¿ã»ãããåççãªã³ã¹ãã§ã¢ãã«ã埮調æŽã§ããŸããååãªæéãšãªãœãŒã¹ãå¿èŠã«ãªããŸãã¢ãã«ã埮調æŽããŸãã
äžèšã®äŸãåœãŠã¯ãŸããªãå Žåã¯ãããã³ãããæé©åããæ¹ãæçã§ããããšãããããŸãã
hf_public_repos/transformers/docs/source/ja/model_doc/chinese_clip.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Chinese-CLIP
## Overview
Chinese-CLIP ã¢ãã«ã¯ãAn Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou ã«ãã£ãŠ [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) ã§ææ¡ãããŸããã
Chinese-CLIP ã¯ãäžåœèªã®ç»åãšããã¹ãã®ãã¢ã®å€§èŠæš¡ãªããŒã¿ã»ããã«å¯Ÿãã CLIP (Radford et al., 2021) ã®å®è£ã§ããã¯ãã¹ã¢ãŒãã«æ€çŽ¢ãå®è¡ã§ããã»ãããŒãã·ã§ããç»ååé¡ããªãŒãã³ãã¡ã€ã³ãªããžã§ã¯ãæ€åºãªã©ã®ããžã§ã³ã¿ã¹ã¯ã®ããžã§ã³ããã¯ããŒã³ãšããŠãæ©èœããŸãããªãªãžãã«ã®äžåœèª-CLIPã³ãŒãã¯[ãã®ãªã³ã¯ã§](https://github.com/OFA-Sys/Chinese-CLIP)ã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*CLIP ã®å€§æå (Radford et al., 2021) ã«ãããèŠèŠèšèªã®äºåèšç·Žã®ããã®å¯Ÿç§åŠç¿ã®ç ç©¶ãšå¿çšãä¿é²ãããŸããããã®ç ç©¶ã§ã¯ãã»ãšãã©ã®ããŒã¿ãå¬éãããŠããããŒã¿ã»ããããååŸãããäžåœèªã®ç»åãšããã¹ãã®ãã¢ã®å€§èŠæš¡ãªããŒã¿ã»ãããæ§ç¯ããæ°ããããŒã¿ã»ããã§äžåœèªã® CLIP ã¢ãã«ãäºåãã¬ãŒãã³ã°ããŸããåœç€Ÿã§ã¯ã7,700 äžãã 9 å 5,800 äžã®ãã©ã¡ãŒã¿ã«ããããè€æ°ã®ãµã€ãºã® 5 ã€ã®äžåœ CLIP ã¢ãã«ãéçºããŠããŸããããã«ãã¢ãã«ã®ããã©ãŒãã³ã¹ãåäžãããããã«ãæåã«ç»åãšã³ã³ãŒããŒãããªãŒãºãããŠã¢ãã«ããã¬ãŒãã³ã°ããæ¬¡ã«ãã¹ãŠã®ãã©ã¡ãŒã¿ãŒãæé©åããŠãã¬ãŒãã³ã°ãã 2 段éã®äºåãã¬ãŒãã³ã°æ¹æ³ãææ¡ããŸããç§ãã¡ã®åæ¬çãªå®éšã§ã¯ãäžåœã® CLIP ããŒãã·ã§ããåŠç¿ãšåŸ®èª¿æŽã®ã»ããã¢ããã§ MUGEãFlickr30K-CNãããã³ COCO-CN äžã§æå端ã®ããã©ãŒãã³ã¹ãéæã§ãããŒãã§ç«¶äºåã®ããããã©ãŒãã³ã¹ãéæã§ããããšãå®èšŒããŠããŸãã - ELEVATER ãã³ãããŒã¯ã§ã®è©äŸ¡ã«åºã¥ãã·ã§ããç»åã®åé¡ (Li et al., 2022)ãã³ãŒããäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ããã¢ããªãªãŒã¹ãããŸããã*
Chinese-CLIP ã¢ãã«ã¯ã[OFA-Sys](https://huggingface.co/OFA-Sys) ã«ãã£ãŠæäŸãããŸããã
## Usage example
以äžã®ã³ãŒã ã¹ããããã¯ãç»åãšããã¹ãã®ç¹åŸŽãšé¡äŒŒæ§ãèšç®ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import ChineseCLIPProcessor, ChineseCLIPModel
>>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # Squirtle, Bulbasaur, Charmander, Pikachu in English
>>> texts = ["æ°å°ŒéŸ", "åŠèç§å", "å°ç«éŸ", "ç®å¡äž"]
>>> # compute image feature
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
>>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
>>> # compute text features
>>> inputs = processor(text=texts, padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
>>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
>>> # compute image-text similarity scores
>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
```
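䞊のスニペットで行っおいる L2 正芏化`norm(p=2, dim=-1)`ず内積によるコサむン類䌌床の蚈算自䜓は単玔で、䟝存ラむブラリなしの最小䟋で次のように確認できたす数倀は説明甚の仮のものです。

```python
import math

def l2_normalize(v):
    """ベクトルをノルム 1 に正芏化する (スニペットの norm(p=2, dim=-1) に盞圓)。"""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_similarity(a, b):
    # 正芏化枈みベクトル同士の内積 = コサむン類䌌床
    a, b = l2_normalize(a), l2_normalize(b)
    return sum(x * y for x, y in zip(a, b))

# 同じベクトル同士はほが 1.0、盎亀するベクトル同士は 0.0 になる
print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```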
çŸåšã次ã®ã¹ã±ãŒã«ã®äºåãã¬ãŒãã³ã°æžã¿ Chinese-CLIP ã¢ãã«ã ð€ Hub ã§å©çšå¯èœã§ãã
- [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16)
- [OFA-Sys/chinese-clip-vit-large-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14)
- [OFA-Sys/chinese-clip-vit-large-patch14-336px](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14-336px)
- [OFA-Sys/chinese-clip-vit-huge-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-huge-patch14)
## ChineseCLIPConfig
[[autodoc]] ChineseCLIPConfig
- from_text_vision_configs
## ChineseCLIPTextConfig
[[autodoc]] ChineseCLIPTextConfig
## ChineseCLIPVisionConfig
[[autodoc]] ChineseCLIPVisionConfig
## ChineseCLIPImageProcessor
[[autodoc]] ChineseCLIPImageProcessor
- preprocess
## ChineseCLIPFeatureExtractor
[[autodoc]] ChineseCLIPFeatureExtractor
## ChineseCLIPProcessor
[[autodoc]] ChineseCLIPProcessor
## ChineseCLIPModel
[[autodoc]] ChineseCLIPModel
- forward
- get_text_features
- get_image_features
## ChineseCLIPTextModel
[[autodoc]] ChineseCLIPTextModel
- forward
## ChineseCLIPVisionModel
[[autodoc]] ChineseCLIPVisionModel
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/bartpho.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BARTpho
## Overview
BARTpho ã¢ãã«ã¯ãNguyen Luong TranãDuong Minh LeãDat Quoc Nguyen ã«ãã£ãŠ [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnam](https://arxiv.org/abs/2109.09701) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BARTpho ã«ã¯ãBARTpho_word ãš BARTpho_syllable ã® 2 ã€ã®ããŒãžã§ã³ããããåã®å¬éãããå€§èŠæš¡ãªåäžèšèªã§ãããããã èªçšã«äºåãã¬ãŒãã³ã°ãããã·ãŒã±ã³ã¹ããŒã·ãŒã±ã³ã¹ ã¢ãã«ãåœç€Ÿã® BARTpho ã¯ãå€§èŠæš¡ãªãã¢ãŒããã¯ãã£ãšäºåãã¬ãŒãã³ã°ã䜿çšããŸãã·ãŒã±ã³ã¹éãã€ãºé€å»ã¢ãã« BART ã®ã¹ããŒã ãªã®ã§ãçæ NLP ã¿ã¹ã¯ã«ç¹ã«é©ããŠããŸããå®éšãããã èªããã¹ãèŠçŽã®äžæµã¿ã¹ã¯ã§ã¯ãèªåè©äŸ¡ãšäººéã«ããè©äŸ¡ã®äž¡æ¹ã§ãBARTpho ã匷åãªããŒã¹ã©ã€ã³ mBART ãäžåããæåç«¯ã®æ§èœãåäžãããŸããå°æ¥ã容æã«ããããã«BARTphoããªãªãŒã¹ããŸãçæçãªãããã èª NLP ã¿ã¹ã¯ã®ç ç©¶ãšå¿çšã*
ãã®ã¢ãã«ã¯ [dqnguyen](https://huggingface.co/dqnguyen) ã«ãã£ãŠæäŸãããŸãããåã®ã³ãŒã㯠[ãã¡ã](https://github.com/VinAIResearch/BARTpho) ã«ãããŸãã
## Usage example
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
>>> line = "Chúng tÎi là những nghiên cứu viên."
>>> input_ids = tokenizer(line, return_tensors="pt")
>>> with torch.no_grad():
... features = bartpho(**input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> from transformers import TFAutoModel
>>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
>>> input_ids = tokenizer(line, return_tensors="tf")
>>> features = bartpho(**input_ids)
```
## Usage tips
- mBARTã«ç¶ããŠãBARTphoã¯BARTã®ãå€§èŠæš¡ãªãã¢ãŒããã¯ãã£ã䜿çšãããã®äžã«è¿œå ã®å±€æ£èŠåå±€ãåããŠããŸãã
ãšã³ã³ãŒããšãã³ãŒãã®äž¡æ¹ããããã£ãŠã[BART ã®ããã¥ã¡ã³ã](bart) ã®äœ¿çšäŸã¯ã䜿çšã«é©å¿ããå Žåã«äœ¿çšãããŸãã
BARTpho ã䜿çšããå Žåã¯ãBART ã«ç¹åããã¯ã©ã¹ã mBART ã«ç¹åãã察å¿ããã¯ã©ã¹ã«çœ®ãæããããšã«ãã£ãŠèª¿æŽããå¿èŠããããŸãã
äŸãã°ïŒ
```python
>>> from transformers import MBartForConditionalGeneration
>>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
>>> TXT = "Chúng tÎi là <mask> nghiên cứu viên."
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
>>> logits = bartpho(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> print(tokenizer.decode(predictions).split())
```
- ãã®å®è£ã¯ããŒã¯ã³åã®ã¿ãç®çãšããŠããŸãã`monolingual_vocab_file`ã¯ãããã èªã«ç¹åããåã§æ§æãããŠããŸãå€èšèª XLM-RoBERTa ããå©çšã§ããäºåãã¬ãŒãã³ã°æžã¿ SentencePiece ã¢ãã«`vocab_file`ããæœåºãããŸããä»ã®èšèª (ãµãã¯ãŒãã«ãã®äºåãã¬ãŒãã³ã°æžã¿å€èšèª SentencePiece ã¢ãã«`vocab_file`ã䜿çšããå Žå) ã»ã°ã¡ã³ããŒã·ã§ã³ã«ãããç¬èªã®èšèªã«ç¹åãã`monolingual_vocab_file`ã䜿çšã㊠BartphoTokenizer ãåå©çšã§ããŸãã
## BartphoTokenizer
[[autodoc]] BartphoTokenizer
hf_public_repos/transformers/docs/source/ja/model_doc/biogpt.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BioGPT
## Overview
BioGPT ã¢ãã«ã¯ãRenqian LuoãLiai SunãYingce XiaãTao QinãSheng ZhangãHoifung PoonãTie-Yan Liu ã«ãã£ãŠ [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) ã§ææ¡ãããŸããã BioGPT ã¯ãçç©å»åŠããã¹ãã®çæãšãã€ãã³ã°ã®ããã®ãã¡ã€ã³åºæã®çæäºåãã¬ãŒãã³ã°æžã¿ Transformer èšèªã¢ãã«ã§ãã BioGPT ã¯ãTransformer èšèªã¢ãã«ã®ããã¯ããŒã³ã«åŸãã1,500 äžã® PubMed æé²ã§æåããäºåãã¬ãŒãã³ã°ãããŠããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã¯ãäžè¬çãªèªç¶èšèªé åã§ã®å€§ããªæåã«è§ŠçºãããŠãçç©å»åŠé åã§ãŸããŸã泚ç®ãéããŠããŸããäžè¬èšèªãã¡ã€ã³ã®äºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã® 2 ã€ã®äž»ãªãã©ã³ããã€ãŸã BERT (ããã³ãã®ããªã¢ã³ã) ãš GPT (ããã³ãã®ããªã¢ã³ã) ã®ãã¡ã1 ã€ç®ã¯ BioBERT ã PubMedBERT ãªã©ã®çç©å»åŠãã¡ã€ã³ã§åºãç ç©¶ãããŠããŸãããããã¯ããŸããŸãªäžæµã®çç©å»åŠçã¿ã¹ã¯ã§å€§ããªæåãåããŠããŸãããçæèœåã®æ¬ åŠã«ããå¿çšç¯å²ãå¶éãããŠããŸãããã®è«æã§ã¯ãå€§èŠæš¡ãªçç©å»åŠæç®ã§äºåãã¬ãŒãã³ã°ããããã¡ã€ã³åºæã®çæ Transformer èšèªã¢ãã«ã§ãã BioGPT ãææ¡ããŸããç§ãã¡ã¯ 6 ã€ã®çç©å»åŠçèªç¶èšèªåŠçã¿ã¹ã¯ã§ BioGPT ãè©äŸ¡ããã»ãšãã©ã®ã¿ã¹ã¯ã§ç§ãã¡ã®ã¢ãã«ã以åã®ã¢ãã«ãããåªããŠããããšãå®èšŒããŸãããç¹ã«ãBC5CDRãKD-DTIãDDI ã®ãšã³ãããŒãšã³ãé¢ä¿æœåºã¿ã¹ã¯ã§ã¯ãããã 44.98%ã38.42%ã40.76% ã® F1 ã¹ã³ã¢ãç²åŸããPubMedQA ã§ã¯ 78.2% ã®ç²ŸåºŠãç²åŸããæ°èšé²ãæš¹ç«ããŸãããããã¹ãçæã«é¢ããç§ãã¡ã®ã±ãŒã¹ã¹ã¿ãã£ã¯ãçç©å»åŠæç®ã«ããã BioGPT ã®å©ç¹ãããã«å®èšŒããçç©å»åŠçšèªã®æµæ¢ãªèª¬æãçæããŸãã*
## Usage tips
- BioGPT ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- BioGPT ã¯å æèšèªã¢ããªã³ã° (CLM) ç®çã§ãã¬ãŒãã³ã°ãããŠãããããã·ãŒã±ã³ã¹åã®æ¬¡ã®ããŒã¯ã³ãäºæž¬ããã®ã«åŒ·åã§ãã run_generation.py ãµã³ãã« ã¹ã¯ãªããã§ç¢ºèªã§ããããã«ããã®æ©èœãå©çšãããšãBioGPT ã¯æ§æçã«äžè²«ããããã¹ããçæã§ããŸãã
- ã¢ãã«ã¯ã以åã«èšç®ãããããŒãšå€ã®ã¢ãã³ã·ã§ã³ ãã¢ã§ãã`past_key_values`(PyTorch ã®å Žå) ãå¥åãšããŠåãåãããšãã§ããŸãããã® (past_key_values ãŸã㯠past) å€ã䜿çšãããšãã¢ãã«ãããã¹ãçæã®ã³ã³ããã¹ãã§äºåã«èšç®ãããå€ãåèšç®ã§ããªããªããŸãã PyTorch ã®äœ¿çšæ³ã®è©³çްã«ã€ããŠã¯ãBioGptForCausalLM.forward() ã¡ãœããã® past_key_values åŒæ°ãåç§ããŠãã ããã
ãã®ã¢ãã«ã¯ã[kamalkraj](https://huggingface.co/kamalkraj) ã«ãã£ãŠæäŸãããŸãããåã®ã³ãŒã㯠[ãã](https://github.com/microsoft/BioGPT) ã«ãããŸãã
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
## BioGptConfig
[[autodoc]] BioGptConfig
## BioGptTokenizer
[[autodoc]] BioGptTokenizer
- save_vocabulary
## BioGptModel
[[autodoc]] BioGptModel
- forward
## BioGptForCausalLM
[[autodoc]] BioGptForCausalLM
- forward
## BioGptForTokenClassification
[[autodoc]] BioGptForTokenClassification
- forward
## BioGptForSequenceClassification
[[autodoc]] BioGptForSequenceClassification
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/bit.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Big Transfer (BiT)
## Overview
BiT ã¢ãã«ã¯ãAlexander KolesnikovãLucas BeyerãXiaohua ZhaiãJoan PuigcerverãJessica YungãSylvain Gelly ã«ãã£ãŠ [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) ã§ææ¡ãããŸãããããŒã«ã»ããŒã«ãºããŒã
BiT ã¯ã[ResNet](resnet) ã®ãããªã¢ãŒããã¯ã㣠(å·äœçã«ã¯ ResNetv2) ã®äºåãã¬ãŒãã³ã°ãã¹ã±ãŒã«ã¢ããããããã®ç°¡åãªã¬ã·ãã§ãããã®æ¹æ³ã«ããã転移åŠç¿ã倧å¹ã«æ¹åãããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ããã衚çŸã®è»¢éã«ããããµã³ãã«å¹çãåäžããèŠèŠçšã®ãã£ãŒã ãã¥ãŒã©ã« ãããã¯ãŒã¯ããã¬ãŒãã³ã°ããéã®ãã€ããŒãã©ã¡ãŒã¿ãŒèª¿æŽãç°¡çŽ åãããŸããå€§èŠæš¡ãªæåž«ããããŒã¿ã»ããã§ã®äºåãã¬ãŒãã³ã°ãšãã¿ãŒã²ãã ã¿ã¹ã¯ã§ã®ã¢ãã«ã®åŸ®èª¿æŽã®ãã©ãã€ã ã忀èšããŸããç§ãã¡ã¯äºåãã¬ãŒãã³ã°ãã¹ã±ãŒã«ã¢ããããBig Transfer (BiT) ãšåŒã¶ã·ã³ãã«ãªã¬ã·ããææ¡ããŸããããã€ãã®æéã«éžæãããã³ã³ããŒãã³ããçµã¿åãããã·ã³ãã«ãªãã¥ãŒãªã¹ãã£ãã¯ã䜿çšããŠè»¢éããããšã«ããã20 ãè¶ããããŒã¿ã»ããã§åªããããã©ãŒãã³ã¹ãå®çŸããŸãã BiT ã¯ãã¯ã©ã¹ããšã« 1 ã€ã®ãµã³ãã«ããåèš 100 äžã®ãµã³ãã«ãŸã§ãé©ãã»ã©åºç¯å²ã®ããŒã¿é åã«ããã£ãŠè¯å¥œã«ããã©ãŒãã³ã¹ãçºæ®ããŸãã BiT ã¯ãILSVRC-2012 ã§ 87.5%ãCIFAR-10 ã§ 99.4%ã19 ã¿ã¹ã¯ã® Visual Task Adaptation Benchmark (VTAB) ã§ 76.3% ã®ããã 1 粟床ãéæããŸãããå°èŠæš¡ãªããŒã¿ã»ããã§ã¯ãBiT 㯠ILSVRC-2012 (ã¯ã©ã¹ããã 10 äŸ) ã§ 76.8%ãCIFAR-10 (ã¯ã©ã¹ããã 10 äŸ) ã§ 97.0% ãéæããŸãããé«ãè»¢åæ§èœãå®çŸããäž»èŠæåã詳现ã«åæããŸãã*
## Usage tips
- BiT ã¢ãã«ã¯ãã¢ãŒããã¯ãã£ã®ç¹ã§ ResNetv2 ãšåçã§ãããæ¬¡ã®ç¹ãç°ãªããŸã: 1) ãã¹ãŠã®ãããæ£èŠåå±€ã [ã°ã«ãŒãæ£èŠå](https://arxiv.org/abs/1803.08494) ã«çœ®ãæããããŸãã
2) [éã¿ã®æšæºå](https://arxiv.org/abs/1903.10520) ã¯ç³ã¿èŸŒã¿å±€ã«äœ¿çšãããŸããèèãã¯ãäž¡æ¹ã®çµã¿åããã倧ããªããããµã€ãºã§ã®ãã¬ãŒãã³ã°ã«åœ¹ç«ã¡ã転移åŠç¿ãžã®åœ±é¿ã«ãéèŠãªå¹æãããããšã瀺ããŠããŸãã
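ここで蚀及されおいるグルヌプ正芏化の蚈算自䜓は単玔で、チャンネルをいく぀かのグルヌプに分け、グルヌプごずの平均ず分散で正芏化したす。以䞋は考え方を瀺すための仮のスケッチで、実際の実装䟋: `torch.nn.GroupNorm`は 4D テンソルに察しお空間次元も含めお統蚈量を蚈算したす。

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    """グルヌプ正芏化の考え方を 1 次元のチャンネルベクトルで瀺す最小スケッチ。"""
    group_size = len(x) // num_groups
    out = []
    for g in range(num_groups):
        group = x[g * group_size:(g + 1) * group_size]
        # グルヌプごずに平均・分散を蚈算しお正芏化する (バッチサむズに䟝存しない)
        mean = sum(group) / group_size
        var = sum((v - mean) ** 2 for v in group) / group_size
        out.extend((v - mean) / math.sqrt(var + eps) for v in group)
    return out


normalized = group_norm([1.0, 2.0, 3.0, 4.0], num_groups=2)
print(normalized)
```

バッチ統蚈に䟝存しないこの性質が、論文で述べられおいる倧きなバッチサむズや転移孊習ずの盞性の背景にありたす。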
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸããã
åã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/big_transfer) ã«ãããŸãã
## Resources
BiT ãå§ããã®ã«åœ¹ç«ã€å¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
<PipelineTag pipeline="image-classification"/>
- [`BitForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)ã
- åç
§: [ç»ååé¡ã¿ã¹ã¯ ã¬ã€ã](../tasks/image_classification)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## BitConfig
[[autodoc]] BitConfig
## BitImageProcessor
[[autodoc]] BitImageProcessor
- preprocess
## BitModel
[[autodoc]] BitModel
- forward
## BitForImageClassification
[[autodoc]] BitForImageClassification
- forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CodeLlama
## Overview
Code Llama ã¢ãã«ã¯ãBaptiste RoziÚre, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve ã«ãã£ãŠ [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç§ãã¡ã¯ Code Llama ããªãªãŒã¹ããŸãããã㯠Llama 2 ã«åºã¥ãã³ãŒãã®å€§èŠæš¡èšèªã¢ãã« ãã¡ããªã§ããããªãŒãã³ ã¢ãã«ã®äžã§æå
端ã®ããã©ãŒãã³ã¹ãåãèŸŒã¿æ©èœãå€§èŠæš¡ãªå
¥åã³ã³ããã¹ãã®ãµããŒããããã°ã©ãã³ã° ã¿ã¹ã¯ã®ãŒãã·ã§ããåœä»€è¿œåŸæ©èœãæäŸããŸããå¹
åºãã¢ããªã±ãŒã·ã§ã³ãã«ããŒããããã®è€æ°ã®ãã¬ãŒããŒãæäŸããŠããŸããåºç€ã¢ãã« (Code Llama)ãPython ç¹å (Code Llama - Python)ãããã³ãããã 7Bã13Bãããã³ 34B ãã©ã¡ãŒã¿ãŒãåããåœä»€è¿œåŸã¢ãã« (Code Llama - Instruct) ã§ãããã¹ãŠã®ã¢ãã«ã¯ 16,000 ããŒã¯ã³ã®ã·ãŒã±ã³ã¹ã§ãã¬ãŒãã³ã°ãããæå€§ 100,000 ããŒã¯ã³ã®å
¥åã§æ¹åãèŠãããŸãã 7B ããã³ 13B ã® Code Llama ãš Code Llama - Instruct ããªã¢ã³ãã¯ãåšå²ã®ã³ã³ãã³ãã«åºã¥ããåã蟌ã¿ããµããŒãããŸãã Code Llama ã¯ãããã€ãã®ã³ãŒã ãã³ãããŒã¯ã§ãªãŒãã³ ã¢ãã«ã®äžã§æå
端ã®ããã©ãŒãã³ã¹ã«éããHumanEval ãš MBPP ã§ããããæå€§ 53% ãš 55% ã®ã¹ã³ã¢ãç²åŸããŸãããç¹ã«ãCode Llama - Python 7B 㯠HumanEval ããã³ MBPP äžã§ Llama 2 70B ãããåªããããã©ãŒãã³ã¹ã瀺ãããã¹ãŠã®ã¢ãã«ã¯ MultiPL-E äžã§å
¬éãããŠããä»ã®ãã¹ãŠã®ã¢ãã«ãããåªããŠããŸããç§ãã¡ã¯ãç ç©¶ãšåæ¥å©çšã®äž¡æ¹ãèš±å¯ããå¯å®¹ãªã©ã€ã»ã³ã¹ã«åºã¥ã㊠Code Llama ããªãªãŒã¹ããŠããŸãã*
ãã¹ãŠã® Code Llama ã¢ãã« ãã§ãã¯ãã€ã³ãã [ãã¡ã](https://huggingface.co/models?search=code_llama) ã§ç¢ºèªãã[codellama org](https://huggingface.co/codellama) ã§æ£åŒã«ãªãªãŒã¹ããããã§ãã¯ãã€ã³ãã確èªããŠãã ããã
ãã®ã¢ãã«ã¯ [ArthurZucker](https://huggingface.co/ArthurZ) ã«ãã£ãŠæäŸãããŸãããèè
ã®ãªãªãžãã«ã®ã³ãŒã㯠[ãã¡ã](https://github.com/facebookresearch/llama) ã«ãããŸãã
## Usage tips and examples
<Tip warning={true}>
Code Llama ã®ããŒã¹ãšãªã `Llama2` ãã¡ããªãŒ ã¢ãã«ã¯ã`bfloat16` ã䜿çšããŠãã¬ãŒãã³ã°ãããŸãããããªãªãžãã«ã®æšè«ã§ã¯ `float16` ã䜿çšããŸããããŸããŸãªç²ŸåºŠãèŠãŠã¿ãŸãããã
* `float32`: ã¢ãã«ã®åæåã«é¢ãã PyTorch ã®èŠçŽã§ã¯ãã¢ãã«ã®éã¿ãã©ã® `dtype` ã§æ ŒçŽããããã«é¢ä¿ãªããã¢ãã«ã `float32` ã«ããŒãããŸãããtransformersãã PyTorch ãšã®äžè²«æ§ãä¿ã€ããã«ãã®èŠåã«åŸã£ãŠããããããã©ã«ãã§éžæãããŸãã `AutoModel` API ã§ã¹ãã¬ãŒãžã®éã¿ä»ãã¿ã€ãã䜿çšããŠãã§ãã¯ãã€ã³ãã®ããŒãããã£ã¹ãããå Žåã¯ã`torch_dtype="auto"` ãæå®ããå¿
èŠããããŸãã `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`ã
* `bfloat16`: ã³ãŒã Llama ã¯ãã®ç²ŸåºŠã§ãã¬ãŒãã³ã°ãããŠããããããããªããã¬ãŒãã³ã°ã埮調æŽã«äœ¿çšããããšããå§ãããŸãã
* `float16`: ãã®ç²ŸåºŠã䜿çšããŠæšè«ãå®è¡ããããšããå§ãããŸããé垞㯠`bfloat16` ããé«éã§ãããè©äŸ¡ã¡ããªã¯ã¹ã«ã¯ `bfloat16` ãšæ¯ã¹ãŠæãããªäœäžãèŠãããªãããã§ãã bfloat16 ã䜿çšããŠæšè«ãå®è¡ããããšãã§ããŸãã埮調æŽåŸãfloat16 ãš bfloat16 ã®äž¡æ¹ã§æšè«çµæã確èªããããšããå§ãããŸãã
äžã§è¿°ã¹ãããã«ãã¢ãã«ãåæåãããšãã« `torch_dtype="auto"` ã䜿çšããªãéããã¹ãã¬ãŒãžã®éã¿ã® `dtype` ã¯ã»ãšãã©ç¡é¢ä¿ã§ãããã®çç±ã¯ãã¢ãã«ãæåã«ããŠã³ããŒããã (ãªã³ã©ã€ã³ã®ãã§ãã¯ãã€ã³ãã® `dtype` ã䜿çš)ãæ¬¡ã« `torch` ã®ããã©ã«ãã® `dtype` ã«ãã£ã¹ããããããã§ã (`torch.float32` ã«ãªããŸã)ãæå®ããã `torch_dtype` ãããå Žåã¯ã代ããã«ããã䜿çšãããŸãã
</Tip>
ãããïŒ

- å
å¡«ã¿ã¹ã¯ã¯ããã«ãµããŒããããŸããå
¥åãåãããå Žæã«ã¯ `tokenizer.fill_token` ã䜿çšããå¿
èŠããããŸãã
- ã¢ãã«å€æã¹ã¯ãªããã¯ã`Llama2` ãã¡ããªã®å Žåãšåãã§ãã
䜿çšäŸã¯æ¬¡ã®ãšããã§ãã
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
ã¹ã¯ãªãããå®è¡ããã«ã¯ã(æå€§ã®ããŒãžã§ã³ã§ãã£ãŠã) float16 粟床ã§ã¢ãã«å
šäœããã¹ãããã®ã«åå㪠CPU RAM ãå¿
èŠã§ããããšã«æ³šæããŠãã ãã (ããã€ãã®ãã§ãã¯ãã€ã³ããããããããã«ã¢ãã«ã®åéã¿ã®äžéšãå«ãŸããŠãããããããããã¹ãŠ RAM ã«ããŒãããå¿
èŠããããŸã)ã
倿åŸãã¢ãã«ãšããŒã¯ãã€ã¶ãŒã¯æ¬¡ã®æ¹æ³ã§ããŒãã§ããŸãã
```python
>>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer
>>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
>>> model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
>>> PROMPT = '''def remove_non_ascii(s: str) -> str:
""" <FILL_ME>
return result
'''
>>> input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
>>> generated_ids = model.generate(input_ids, max_new_tokens=128)
>>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0]
>>> print(PROMPT.replace("<FILL_ME>", filling))
def remove_non_ascii(s: str) -> str:
""" Remove non-ASCII characters from a string.
Args:
s: The string to remove non-ASCII characters from.
Returns:
The string with non-ASCII characters removed.
"""
result = ""
for c in s:
if ord(c) < 128:
result += c
return result
```
å¡ãã€ã¶ãããéšåã ããå¿
èŠãªå Žå:
```python
>>> from transformers import pipeline
>>> import torch
>>> generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto")
>>> generator('def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return result', max_new_tokens = 128, return_type = 1)
```
å
éšã§ã¯ãããŒã¯ãã€ã¶ãŒã `<FILL_ME>` ã«ãã£ãŠ [èªåçã«åå²](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) ãã[ãªãªãžãã«ã®ãã¬ãŒãã³ã° ãã¿ãŒã³](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402) ã«åŸã£ãæžåŒèšå®æžã¿ã®å
¥åæååãäœæããŸããããã¯ããã¿ãŒã³ãèªåã§æºåãããããå
ç¢ã§ãããããã°ãéåžžã«é£ããããŒã¯ã³ã®æ¥çãªã©ã®èœãšã穎ãåé¿ã§ããŸãããã®ã¢ãã«ãä»ã®ã¢ãã«ã«å¿
èŠãª CPU ããã³ GPU ã¡ã¢ãªã®éã確èªããã«ã¯ããã®å€ã決å®ããã®ã«åœ¹ç«ã€ [ãã®èšç®ããŒã«](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) ã詊ããŠãã ããã
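`<FILL_ME>` ã«ããåå²ã®ã€ã¡ãŒãžãã€ãã¿ããã®ç°¡æã¹ã±ããã§ã (ããŒã¯ãã€ã¶ãŒå
éšã®å®éã®å®è£
ã§ã¯ãªããinfilling çšã«ããã³ããã prefix / suffix ã«åãããšããèãæ¹ã ãã瀺ã説æçšã®äŸã§ã颿°å `split_fill_prompt` ã¯ä»®ã®ãã®ã§ã):

```python
FILL_TOKEN = "<FILL_ME>"

def split_fill_prompt(prompt):
    """ããã³ããã FILL_TOKEN ã®ååŸã§ prefix / suffix ã«åå²ããã"""
    if FILL_TOKEN not in prompt:
        raise ValueError(f"prompt must contain {FILL_TOKEN}")
    prefix, suffix = prompt.split(FILL_TOKEN, maxsplit=1)
    return prefix, suffix

prefix, suffix = split_fill_prompt('def f(x):\n    """ <FILL_ME>\n    return x')
print(repr(prefix))  # 'def f(x):\n    """ '
print(repr(suffix))  # '\n    return x'
```

å®éã«ã¯ãããŒã¯ãã€ã¶ãŒããã®ãããªåå²ãšç¹æ®ããŒã¯ã³ã®ä»äžãèªåã§è¡ããããŠãŒã¶ãŒãæžåŒãæåã§çµã¿ç«ãŠãå¿
èŠã¯ãããŸããã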
LLaMA ããŒã¯ãã€ã¶ãŒã¯ã[sentencepiece](https://github.com/google/sentencepiece) ã«åºã¥ã BPE ã¢ãã«ã§ãã sentencepiece ã®çã® 1 ã€ã¯ãã·ãŒã±ã³ã¹ããã³ãŒããããšãã«ãæåã®ããŒã¯ã³ãåèªã®å
é (äŸ: ãBananaã) ã§ããå ŽåãããŒã¯ãã€ã¶ãŒã¯æååã®å
é ã«ãã¬ãã£ãã¯ã¹ ã¹ããŒã¹ã远å ããªãããšã§ãã
<Tip>
ã³ãŒã Llama ã¯ã`Llama2` ã¢ãã«ãšåãã¢ãŒããã¯ãã£ãæã£ãŠããŸããAPI ãªãã¡ã¬ã³ã¹ã«ã€ããŠã¯ã[Llama2 ã®ããã¥ã¡ã³ã ããŒãž](llama2) ãåç
§ããŠãã ããã
以äžã® Code Llama ããŒã¯ãã€ã¶ãŒã®ãªãã¡ã¬ã³ã¹ãèŠã€ããŠãã ããã
</Tip>
## CodeLlamaTokenizer
[[autodoc]] CodeLlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CodeLlamaTokenizerFast
[[autodoc]] CodeLlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Bark
## Overview
Bark ã¯ã[suno-ai/bark](https://github.com/suno-ai/bark) ã§ Suno AI ã«ãã£ãŠææ¡ããããã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ããã¹ãèªã¿äžãã¢ãã«ã§ãã
Bark 㯠4 ã€ã®äž»èŠãªã¢ãã«ã§æ§æãããŠããŸãã
- [`BarkSemanticModel`] (ãããã¹ããã¢ãã«ãšãåŒã°ãã): ããŒã¯ã³åãããããã¹ããå
¥åãšããŠåãåããããã¹ãã®æå³ãæããã»ãã³ãã£ã㯠ããã¹ã ããŒã¯ã³ãäºæž¬ããå æçèªå·±ååž°å€æã¢ãã«ã
- [`BarkCoarseModel`] (ãç²ãé³é¿ãã¢ãã«ãšãåŒã°ãã): [`BarkSemanticModel`] ã¢ãã«ã®çµæãå
¥åãšããŠåãåãå æçèªå·±ååž°å€æåšã EnCodec ã«å¿
èŠãªæåã® 2 ã€ã®ãªãŒãã£ãª ã³ãŒãããã¯ãäºæž¬ããããšãç®çãšããŠããŸãã
- [`BarkFineModel`] (ã埮现é³é¿ãã¢ãã«): ä»åºŠã¯éå æçãªãŒããšã³ã³ãŒã㌠ãã©ã³ã¹ãã©ãŒããŒã§ã以åã®ã³ãŒãããã¯åã蟌ã¿ã®åèšã«åºã¥ããŠæåŸã®ã³ãŒãããã¯ãç¹°ãè¿ãäºæž¬ããŸãã
- [`EncodecModel`] ãããã¹ãŠã®ã³ãŒããã㯠ãã£ãã«ãäºæž¬ããã®ã§ãBark ã¯ããã䜿çšããŠåºåãªãŒãã£ãªé
åããã³ãŒãããŸãã
æåã® 3 ã€ã®ã¢ãžã¥ãŒã«ã¯ãããããç¹å®ã®äºåå®çŸ©ãããé³å£°ã«åŸã£ãŠåºåãµãŠã³ãã調æŽããããã®æ¡ä»¶ä»ãã¹ããŒã«ãŒåã蟌ã¿ããµããŒãã§ããããšã«æ³šæããŠãã ããã
### Optimizing Bark
Bark ã¯ãã³ãŒããæ°è¡è¿œå ããã ãã§æé©åã§ãã**ã¡ã¢ãª ãããããªã³ãã倧å¹
ã«åæž**ããã**æšè«ãé«éå**ãããŸãã
#### Using half-precision
ã¢ãã«ãå粟床ã§ããŒãããã ãã§ãæšè«ãé«éåããã¡ã¢ãªäœ¿çšéã 50% åæžã§ããŸãã
```python
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
```
#### Using ð€ Better Transformer
Better Transformer ã¯ãå
éšã§ã«ãŒãã«èåãå®è¡ãã ð€ Optimum ã®æ©èœã§ããããã©ãŒãã³ã¹ãäœäžãããããšãªããé床ã 20% ïœ 30% åäžãããããšãã§ããŸããã¢ãã«ã ð€ Better Transformer ã«ãšã¯ã¹ããŒãããã®ã«å¿
èŠãªã³ãŒã㯠1 è¡ã ãã§ãã
```python
model = model.to_bettertransformer()
```
ãã®æ©èœã䜿çšããåã« ð€ Optimum ãã€ã³ã¹ããŒã«ããå¿
èŠãããããšã«æ³šæããŠãã ããã [ã€ã³ã¹ããŒã«æ¹æ³ã¯ãã¡ã](https://huggingface.co/docs/optimum/installation)
#### Using CPU offload
åè¿°ããããã«ãBark 㯠4 ã€ã®ãµãã¢ãã«ã§æ§æãããŠããããªãŒãã£ãªçæäžã«é çªã«åŒã³åºãããŸããèšãæããã°ã1 ã€ã®ãµãã¢ãã«ã䜿çšãããŠããéãä»ã®ãµãã¢ãã«ã¯ã¢ã€ãã«ç¶æ
ã«ãªããŸãã

CUDA ããã€ã¹ã䜿çšããŠããå Žåãã¡ã¢ãª ãããããªã³ãã 80% åæžã§ããç°¡åãªè§£æ±ºçã¯ãã¢ã€ãã«ç¶æ
ã®ãµãã¢ãã«ã GPU ãããªãããŒãããããšã§ãããã®æäœã¯ CPU ãªãããŒããšåŒã°ãã1 è¡ã®ã³ãŒãã§äœ¿çšã§ããŸãã
```python
model.enable_cpu_offload()
```
ãã®æ©èœã䜿çšããåã«ãð€ Accelerate ãã€ã³ã¹ããŒã«ããå¿
èŠãããããšã«æ³šæããŠãã ããã [ã€ã³ã¹ããŒã«æ¹æ³ã¯ãã¡ã](https://huggingface.co/docs/accelerate/basic_tutorials/install)
#### Combining optimization techniques
æé©åææ³ãçµã¿åãããŠãCPU ãªãããŒããå粟床ãð€ Better Transformer ããã¹ãŠäžåºŠã«äœ¿çšã§ããŸãã
```python
from transformers import BarkModel
import torch
from optimum.bettertransformer import BetterTransformer
device = "cuda" if torch.cuda.is_available() else "cpu"
# load in fp16
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
# convert to bettertransformer
model = BetterTransformer.transform(model, keep_original_model=False)
# enable CPU offload
model.enable_cpu_offload()
```
æšè«æé©åææ³ã®è©³çްã«ã€ããŠã¯ã[ãã¡ã](https://huggingface.co/docs/transformers/perf_infer_gpu_one) ãã芧ãã ããã
### Tips
Suno ã¯ãå€ãã®èšèªã§é³å£°ããªã»ããã®ã©ã€ãã©ãªãæäŸããŠããŸã ([ãã¡ã](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c))ã
ãããã®ããªã»ããã¯ãhub ã«ãã¢ããããŒããããŠããŸã ([ãã¡ã](https://huggingface.co/suno/bark-small/tree/main/speaker_embeddings) ãŸã㯠[ãã¡ã](https://huggingface.co/suno/bark/tree/main/speaker_embeddings))ã
```python
>>> from transformers import AutoProcessor, BarkModel
>>> processor = AutoProcessor.from_pretrained("suno/bark")
>>> model = BarkModel.from_pretrained("suno/bark")
>>> voice_preset = "v2/en_speaker_6"
>>> inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
Bark ã¯ãéåžžã«ãªã¢ã«ãª **å€èšèª** é³å£°ã ãã§ãªãã鳿¥œãèæ¯ãã€ãºãåçŽãªå¹æé³ãªã©ã®ä»ã®é³å£°ãçæã§ããŸãã
```python
>>> # Multilingual speech - simplified Chinese
>>> inputs = processor("æäººçïŒæäŒè¯Žäžæ")
>>> # Multilingual speech - French - let's use a voice_preset as well
>>> inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
>>> # Bark can also generate music. You can help it out by adding music notes around your lyrics.
>>> inputs = processor("⪠Hello, my dog is cute âª")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
ãã®ã¢ãã«ã¯ãç¬ããããæ¯ãæ³£ããªã©ã®**éèšèªã³ãã¥ãã±ãŒã·ã§ã³**ãçæããããšãã§ããŸãã
```python
>>> # Adding non-speech cues to the input text
>>> inputs = processor("Hello uh ... [clears throat], my dog is cute [laughter]")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
ãªãŒãã£ãªãä¿åããã«ã¯ãã¢ãã«èšå®ãš scipy ãŠãŒãã£ãªãã£ãããµã³ãã« ã¬ãŒããååŸããã ãã§ãã
```python
>>> from scipy.io.wavfile import write as write_wav
>>> # save audio to disk, but first take the sample rate from the model config
>>> sample_rate = model.generation_config.sample_rate
>>> write_wav("bark_generation.wav", sample_rate, audio_array)
```
ãã®ã¢ãã«ã¯ã[Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe) ããã³ [Sanchit Gandhi (sanchit-gandhi)](https://github.com/sanchit-gandhi) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/suno-ai/bark) ã«ãããŸãã
## BarkConfig
[[autodoc]] BarkConfig
- all
## BarkProcessor
[[autodoc]] BarkProcessor
- all
- __call__
## BarkModel
[[autodoc]] BarkModel
- generate
- enable_cpu_offload
## BarkSemanticModel
[[autodoc]] BarkSemanticModel
- forward
## BarkCoarseModel
[[autodoc]] BarkCoarseModel
- forward
## BarkFineModel
[[autodoc]] BarkFineModel
- forward
## BarkCausalModel
[[autodoc]] BarkCausalModel
- forward
## BarkCoarseConfig
[[autodoc]] BarkCoarseConfig
- all
## BarkFineConfig
[[autodoc]] BarkFineConfig
- all
## BarkSemanticConfig
[[autodoc]] BarkSemanticConfig
- all
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLIP
## Overview
BLIP ã¢ãã«ã¯ã[BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) ã§ Junnan LiãDongxu LiãCaiming XiongãSteven Hoi ã«ãã£ãŠææ¡ãããŸããã
BLIP ã¯ã次ã®ãããªããŸããŸãªãã«ãã¢ãŒãã« ã¿ã¹ã¯ãå®è¡ã§ããã¢ãã«ã§ãã
- èŠèŠçãªè³ªåå¿ç
- ç»åãšããã¹ãã®æ€çŽ¢ïŒç»åãšããã¹ãã®ãããã³ã°ïŒ
- ç»åãã£ãã·ã§ã³
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*èŠèŠèšèªäºåãã¬ãŒãã³ã° (VLP) ã«ãããå€ãã®èŠèŠèšèªã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžããŸããããã ããæ¢åã®äºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ã®ã»ãšãã©ã¯ãçè§£ããŒã¹ã®ã¿ã¹ã¯ãŸãã¯çæããŒã¹ã®ã¿ã¹ã¯ã®ããããã§ã®ã¿åªããŠããŸããããã«ãããã©ãŒãã³ã¹ã®åäžã¯ãæé©ãšã¯èšããªãç£èŠãœãŒã¹ã§ãã Web ããåéããããã€ãºã®å€ãç»åãšããã¹ãã®ãã¢ã§ããŒã¿ã»ãããã¹ã±ãŒã«ã¢ããããããšã«ãã£ãŠå€§å¹
ã«éæãããŠããŸããããã®è«æã§ã¯ãèŠèŠèšèªã®çè§£ãšçæã¿ã¹ã¯ã®äž¡æ¹ã«æè»ã«ç§»è¡ããæ°ãã VLP ãã¬ãŒã ã¯ãŒã¯ã§ãã BLIP ãææ¡ããŸãã BLIP ã¯ããã£ãã·ã§ã³ãããŒãã¹ãã©ããããããšã§ãã€ãºã®å€ã Web ããŒã¿ã广çã«å©çšããŸãããã£ãã·ã§ããŒãåæãã£ãã·ã§ã³ãçæãããã£ã«ã¿ãŒããã€ãºã®å€ããã£ãã·ã§ã³ãé€å»ããŸããç»åããã¹ãæ€çŽ¢ (å¹³ååçŸç +2.7%@1)ãç»åãã£ãã·ã§ã³çæ (CIDEr ã§ +2.8%)ãVQA (VQA ã¹ã³ã¢ã§ +1.6%) ãªã©ãå¹
åºãèŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®çµæãéæããŸãã BLIP ã¯ããŒãã·ã§ããæ¹åŒã§ãããªèšèªã¿ã¹ã¯ã«çŽæ¥è»¢éããå Žåã«ãã匷åãªäžè¬åèœåãçºæ®ããŸããã³ãŒããã¢ãã«ãããŒã¿ã»ããããªãªãŒã¹ãããŠããŸãã*

ãã®ã¢ãã«ã¯ [ybelkada](https://huggingface.co/ybelkada) ã«ãã£ãŠæäŸãããŸããã
å
ã®ã³ãŒã㯠[ãã](https://github.com/salesforce/BLIP) ã«ãããŸãã
## Resources
- [Jupyter ããŒãããã¯](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) ã«ã¹ã¿ã ããŒã¿ã»ããã®ç»åãã£ãã·ã§ã³çšã« BLIP ã埮調æŽããæ¹æ³
## BlipConfig
[[autodoc]] BlipConfig
- from_text_vision_configs
## BlipTextConfig
[[autodoc]] BlipTextConfig
## BlipVisionConfig
[[autodoc]] BlipVisionConfig
## BlipProcessor
[[autodoc]] BlipProcessor
## BlipImageProcessor
[[autodoc]] BlipImageProcessor
- preprocess
<frameworkcontent>
<pt>
## BlipModel
[[autodoc]] BlipModel
- forward
- get_text_features
- get_image_features
## BlipTextModel
[[autodoc]] BlipTextModel
- forward
## BlipVisionModel
[[autodoc]] BlipVisionModel
- forward
## BlipForConditionalGeneration
[[autodoc]] BlipForConditionalGeneration
- forward
## BlipForImageTextRetrieval
[[autodoc]] BlipForImageTextRetrieval
- forward
## BlipForQuestionAnswering
[[autodoc]] BlipForQuestionAnswering
- forward
</pt>
<tf>
## TFBlipModel
[[autodoc]] TFBlipModel
- call
- get_text_features
- get_image_features
## TFBlipTextModel
[[autodoc]] TFBlipTextModel
- call
## TFBlipVisionModel
[[autodoc]] TFBlipVisionModel
- call
## TFBlipForConditionalGeneration
[[autodoc]] TFBlipForConditionalGeneration
- call
## TFBlipForImageTextRetrieval
[[autodoc]] TFBlipForImageTextRetrieval
- call
## TFBlipForQuestionAnswering
[[autodoc]] TFBlipForQuestionAnswering
- call
</tf>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BORT
<Tip warning={true}>
ãã®ã¢ãã«ã¯ã¡ã³ããã³ã¹ ã¢ãŒãã®ã¿ã§ãããã³ãŒãã倿Žããæ°ãã PR ã¯åãä»ããããŸããã
ãã®ã¢ãã«ã®å®è¡äžã«åé¡ãçºçããå Žåã¯ããã®ã¢ãã«ããµããŒãããŠããæåŸã®ããŒãžã§ã³ (v4.30.0) ãåã€ã³ã¹ããŒã«ããŠãã ããã
ãããè¡ãã«ã¯ãã³ãã³ã `pip install -U Transformers==4.30.0` ãå®è¡ããŸãã
</Tip>
## Overview
BORT ã¢ãã«ã¯ãAdrian de Wynter, Daniel J. Perry ã«ãã£ãŠ [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) ã§ææ¡ãããŸãããããã¯ãBERT ã®ã¢ãŒããã¯ã㣠ãã©ã¡ãŒã¿ã®æé©ãªãµãã»ããã§ãããèè
ã¯ããããBortããšåŒãã§ããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç§ãã¡ã¯ããã¥ãŒã©ã« ã¢ãŒããã¯ãã£æ€çŽ¢ã®ã¢ã«ãŽãªãºã ã«ãããæè¿ã®ç»æçãªææãé©çšããããšã§ãDevlin ã (2018) ã® BERT ã¢ãŒããã¯ãã£ãããã¢ãŒããã¯ã㣠ãã©ã¡ãŒã¿ã®æé©ãªãµãã»ãããæœåºããŸããç§ãã¡ããBortããšåŒã¶ãã®æé©ãªãµãã»ããã¯æããã«å°ããããã®æå¹ãµã€ãº (ã€ãŸããåã蟌ã¿å±€ãæ°ããªã) ã¯ãªãªãžãã«ã® BERT-large ã¢ãŒããã¯ãã£ã® 5.5%ãæ£å³ãµã€ãºã® 16% ã§ãããŸã Bort 㯠288 GPU æéã§äºåãã¬ãŒãã³ã°ã§ããŸããããã¯ãæé«æ§èœã® BERT ãã©ã¡ããªã㯠ã¢ãŒããã¯ã㣠ããªã¢ã³ãã§ãã RoBERTa-large (Liu et al., 2019) ã®äºåãã¬ãŒãã³ã°ã«å¿
èŠãªæéã® 1.2% ã§ãããåãããŒããŠã§ã¢ã§ BERT-large ããã¬ãŒãã³ã°ããã®ã«å¿
èŠãª GPU æéã®äžçèšé²ã®çŽ 33% ã«çžåœããŸããããã« Bort 㯠CPU äžã§ 7.9 åé«éã§ããã ãã§ãªããåã¢ãŒããã¯ãã£ã®ä»ã®å§çž®ããªã¢ã³ããäžéšã®éå§çž®ããªã¢ã³ããããããã©ãŒãã³ã¹ãåªããŠããŸããè€æ°ã®å
¬éèªç¶èšèªçè§£ (NLU) ãã³ãããŒã¯ã«ãããŠãBERT-large ã«å¯ŸããŠçµ¶å¯Ÿå€ã§ 0.3% ïœ 31% ã®ããã©ãŒãã³ã¹åäžãåŸãããŸãã*
ãã®ã¢ãã«ã¯ [stefan-it](https://huggingface.co/stefan-it) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒãã¯[ãã](https://github.com/alexa/bort/)ã«ãããŸãã
## Usage tips
- BORT ã®ã¢ãã« ã¢ãŒããã¯ãã£ã¯ BERT ã«åºã¥ããŠããŸããã¢ãã«ã® API ãªãã¡ã¬ã³ã¹ãšäœ¿çšäŸã®è©³çްã«ã€ããŠã¯ã[BERT ã®ããã¥ã¡ã³ã ããŒãž](bert) ãåç
§ããŠãã ããã
- BORT 㯠BERT ããŒã¯ãã€ã¶ãŒã®ä»£ããã« RoBERTa ããŒã¯ãã€ã¶ãŒã䜿çšããŸããããŒã¯ãã€ã¶ãŒã® API ãªãã¡ã¬ã³ã¹ãšäœ¿çšäŸã«ã€ããŠã¯ã[RoBERTa ã®ããã¥ã¡ã³ã ããŒãž](roberta) ãåç
§ããŠãã ããã
- BORT ã«ã¯ã[Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html#fine-tuning-with-algebraic-topology) ãšåŒã°ããç¹å®ã®åŸ®èª¿æŽã¢ã«ãŽãªãºã ãå¿
èŠã§ãããæ®å¿µãªãããŸã ãªãŒãã³ãœãŒã¹åãããŠããŸããã BORT ã®åŸ®èª¿æŽãæ©èœãããããã«èª°ãããã®ã¢ã«ãŽãªãºã ãå®è£
ããã°ãã³ãã¥ããã£ã«ãšã£ãŠéåžžã«åœ¹ç«ã¡ãŸãã
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BigBird
## Overview
BigBird ã¢ãã«ã¯ãManzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed ã«ãã£ãŠ [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) ã§ææ¡ãããŸããã BigBird ã¯ããŸã°ãæ³šæ (sparse attention) ã«åºã¥ããã©ã³ã¹ãã©ãŒããŒã§ãããBERT ãªã©ã® Transformer ããŒã¹ã®ã¢ãã«ãããã«é·ãã·ãŒã±ã³ã¹ã«æ¡åŒµããŸãããŸã°ãæ³šæã«å ããŠãBigBird ã¯å
¥åã·ãŒã±ã³ã¹ã«ã°ããŒãã«æ³šæãšã©ã³ãã 泚æãé©çšããŸããçè«çã«ã¯ããŸã°ãæ³šæãã°ããŒãã«æ³šæãã©ã³ãã 泚æãçµã¿åãããŠé©çšãããšå®å
šãªæ³šæ (full attention) ãè¿äŒŒã§ããäžæ¹ã§ãé·ãã·ãŒã±ã³ã¹ã«å¯ŸããŠã¯èšç®å¹çã倧å¹
ã«åäžããããšã瀺ãããŠããŸããããé·ãã³ã³ããã¹ããåŠçã§ããèœåã®çµæãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãªé·ãææžã® NLP ã¿ã¹ã¯ã«ãããŠãBERT ã RoBERTa ãšæ¯èŒããŠããã©ãŒãã³ã¹ã®åäžã瀺ããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BERT ãªã©ã®ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ã¢ãã«ã¯ãNLP ã§æãæåããæ·±å±€åŠç¿ã¢ãã«ã® 1 ã€ã§ããæ®å¿µãªããããã®äžæ žçãªå¶éã® 1 ã€ã¯ãå®å
šãªæ³šæã¡ã«ããºã ã«èµ·å ãããã·ãŒã±ã³ã¹é·ã«å¯Ÿããäºæ¬¡ã®äŸåæ§ (äž»ã«ã¡ã¢ãªã«é¢ãã) ã§ããããã解決ããããã«ãBigBird ã¯ãã®äºæ¬¡ã®äŸåé¢ä¿ãç·åœ¢ã«åæžãããŸã°ããªæ³šæã¡ã«ããºã ãææ¡ããŸããç§ãã¡ã¯ãBigBird ãã·ãŒã±ã³ã¹é¢æ°ã®æ±çšè¿äŒŒåšã§ãããã¥ãŒãªã³ã°å®å
šã§ããããšã瀺ããŸããããã«ãããäºæ¬¡ã®å®å
šæ³šæã¢ãã«ã®ãããã®ç¹æ§ãä¿åãããŸãããŸããçè«åæã®éçšã§ãã¹ããŒã¹æ³šæã¡ã«ããºã ã®äžéšãšã㊠O(1) åã®ã°ããŒãã« ããŒã¯ã³ (CLS ãªã©) ãæã€å©ç¹ã®äžéšãæããã«ãªããŸããææ¡ãããã¹ããŒã¹ ã¢ãã³ã·ã§ã³ã¯ãåæ§ã®ããŒããŠã§ã¢ã§ä»¥åã«å¯èœã ã£ããã®ã® 8 åã®é·ãã®ã·ãŒã±ã³ã¹ãåŠçã§ããŸããããé·ãã³ã³ããã¹ããåŠçã§ããèœåã®çµæãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžãããŸããããã«ãã²ããã¯ã¹ ããŒã¿ãžã®æ°ããã¢ããªã±ãŒã·ã§ã³ãææ¡ããŸãã*
ãããïŒ

- BigBird ã®æ³šæãã©ã®ããã«æ©èœãããã®è©³çްãªèª¬æã«ã€ããŠã¯ã[ãã®ããã°æçš¿](https://huggingface.co/blog/big-bird) ãåç
§ããŠãã ããã
- BigBird ã«ã¯ã**original_full** ãš **block_sparse** ã® 2 ã€ã®å®è£
ãä»å±ããŠããŸããã·ãŒã±ã³ã¹é·ã 1024 æªæºã®å Žåã**block_sparse** ã䜿çšãããã¡ãªããããªãããã**original_full** ã䜿çšããããšããå§ãããŸãã
- ã³ãŒãã¯çŸåšã3 ãããã¯ãš 2 ã°ããŒãã« ãããã¯ã®ãŠã£ã³ã㊠ãµã€ãºã䜿çšããŠããŸãã
- ã·ãŒã±ã³ã¹ã®é·ãã¯ããã㯠ãµã€ãºã§å²ãåããå¿
èŠããããŸãã
- çŸåšã®å®è£
ã§ã¯ **ITC** ã®ã¿ããµããŒããããŠããŸãã
- çŸåšã®å®è£
ã§ã¯ **num_random_blocks = 0** ã¯ãµããŒããããŠããŸããã
- BigBird ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ãªã®ã§ãéåžžã¯å
¥åãå·Šã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã

ãã®ã¢ãã«ã¯ã[vasudevgupta](https://huggingface.co/vasudevgupta) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/bigbird) ã«ãããŸãã
## ããã¥ã¡ã³ã ãªãœãŒã¹
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [ãã¹ã¯ãããèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_lang_modeling)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## BigBirdConfig
[[autodoc]] BigBirdConfig
## BigBirdTokenizer
[[autodoc]] BigBirdTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## BigBirdTokenizerFast
[[autodoc]] BigBirdTokenizerFast
## BigBird specific outputs
[[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
<frameworkcontent>
<pt>
## BigBirdModel
[[autodoc]] BigBirdModel
- forward
## BigBirdForPreTraining
[[autodoc]] BigBirdForPreTraining
- forward
## BigBirdForCausalLM
[[autodoc]] BigBirdForCausalLM
- forward
## BigBirdForMaskedLM
[[autodoc]] BigBirdForMaskedLM
- forward
## BigBirdForSequenceClassification
[[autodoc]] BigBirdForSequenceClassification
- forward
## BigBirdForMultipleChoice
[[autodoc]] BigBirdForMultipleChoice
- forward
## BigBirdForTokenClassification
[[autodoc]] BigBirdForTokenClassification
- forward
## BigBirdForQuestionAnswering
[[autodoc]] BigBirdForQuestionAnswering
- forward
</pt>
<jax>
## FlaxBigBirdModel
[[autodoc]] FlaxBigBirdModel
- __call__
## FlaxBigBirdForPreTraining
[[autodoc]] FlaxBigBirdForPreTraining
- __call__
## FlaxBigBirdForCausalLM
[[autodoc]] FlaxBigBirdForCausalLM
- __call__
## FlaxBigBirdForMaskedLM
[[autodoc]] FlaxBigBirdForMaskedLM
- __call__
## FlaxBigBirdForSequenceClassification
[[autodoc]] FlaxBigBirdForSequenceClassification
- __call__
## FlaxBigBirdForMultipleChoice
[[autodoc]] FlaxBigBirdForMultipleChoice
- __call__
## FlaxBigBirdForTokenClassification
[[autodoc]] FlaxBigBirdForTokenClassification
- __call__
## FlaxBigBirdForQuestionAnswering
[[autodoc]] FlaxBigBirdForQuestionAnswering
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Blenderbot Small
[`BlenderbotSmallModel`] ãš [`BlenderbotSmallForConditionalGeneration`] ã¯ããã§ãã¯ãã€ã³ã [facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M) ãšçµã¿åãããå Žåã«ã®ã¿äœ¿çšãããŸããããå€§èŠæš¡ãª Blenderbot ãã§ãã¯ãã€ã³ãã«ã¯ã代ããã« [`BlenderbotModel`] ãš [`BlenderbotForConditionalGeneration`] ã䜿çšããŠãã ããã
## Overview
Blender ãã£ããããã ã¢ãã«ã¯ãStephen RollerãEmily DinanãNaman GoyalãDa JuãMary WilliamsonãYinghan LiuãJing XuãMyle OttãKurt ShusterãEric M. SmithãY-Lan BoureauãJason Weston ã«ãã£ãŠ 2020 幎 4 æ 30 æ¥ã« [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) ã§ææ¡ãããŸããã
è«æã®èŠæšã¯æ¬¡ã®ãšããã§ãã
*ãªãŒãã³ãã¡ã€ã³ã®ãã£ãããããã®æ§ç¯ã¯ãæ©æ¢°åŠç¿ç ç©¶ã«ãšã£ãŠé£ããåéã§ããããŸã§ã®ç ç©¶ã§ã¯ããã¥ãŒã©ã« ã¢ãã«ããã©ã¡ãŒã¿ãŒæ°ãšãã¬ãŒãã³ã°å¯Ÿè±¡ããŒã¿ã®ãµã€ãºã§ã¹ã±ãŒãªã³ã°ãããšçµæãåäžããããšã瀺ãããŠããŸãããæ¬ç ç©¶ã§ã¯ã髿§èœã®ãã£ãããããã«ã¯ä»ã®èŠçŽ ãéèŠã§ããããšã瀺ããŸããè¯ãäŒè©±ã«ã¯ãäŒè©±ã®å°éå®¶ãã·ãŒã ã¬ã¹ã«èåãããå€ãã®ã¹ãã«ãå¿
èŠã§ããããªãã¡ãé
åçãªè©±ã®ãã€ã³ããæäŸãã€ã€çžæã®è©±ãèãããšãäžè²«ããæ
床 (ãã«ãœã) ãç¶æããªããç¥èãå
±æãåæ§ãé©åã«è¡šçŸããããšã§ããé©åãªãã¬ãŒãã³ã° ããŒã¿ãšçææŠç¥ãäžããããå Žåãå€§èŠæš¡ã¢ãã«ããããã®ã¹ãã«ãåŠç¿ã§ããããšã瀺ããŸãã 90Mã2.7Bã9.4B ãã©ã¡ãŒã¿ãŒã®ã¢ãã«ã䜿çšããŠãããã®ã¬ã·ãã®ããªã¢ã³ããæ§ç¯ããã¢ãã«ãšã³ãŒããå
¬éããŸãã人éã«ããè©äŸ¡ã§ã¯ãåœç€Ÿã®æè¯ã®ã¢ãã«ã¯ããã«ãã¿ãŒã³å¯Ÿè©±ã«ãããŠãé
åãšäººéæ§ã®æž¬å®ãšãã芳ç¹ã§æ¢åã®ã¢ãããŒããããåªããŠããŸãã次ã«ãåŒç€Ÿã¢ãã«ã®æ
éäºäŸã®åæã«ãã£ãŠããã®ç ç©¶ã®éçã«ã€ããŠèª¬æããŸãã*
ãããïŒ
- Blenderbot Small ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ãªã®ã§ãéåžžã¯å
¥åãå·Šã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
ãã®ã¢ãã«ã¯ã[patrickvonplaten](https://huggingface.co/patrickvonplaten) ã«ãã£ãŠæäŸãããŸãããèè
ã®ã³ãŒã㯠[ãã](https://github.com/facebookresearch/ParlAI) ã«ãããŸãã
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
## BlenderbotSmallConfig
[[autodoc]] BlenderbotSmallConfig
## BlenderbotSmallTokenizer
[[autodoc]] BlenderbotSmallTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## BlenderbotSmallTokenizerFast
[[autodoc]] BlenderbotSmallTokenizerFast
## BlenderbotSmallModel
[[autodoc]] BlenderbotSmallModel
- forward
## BlenderbotSmallForConditionalGeneration
[[autodoc]] BlenderbotSmallForConditionalGeneration
- forward
## BlenderbotSmallForCausalLM
[[autodoc]] BlenderbotSmallForCausalLM
- forward
## TFBlenderbotSmallModel
[[autodoc]] TFBlenderbotSmallModel
- call
## TFBlenderbotSmallForConditionalGeneration
[[autodoc]] TFBlenderbotSmallForConditionalGeneration
- call
## FlaxBlenderbotSmallModel
[[autodoc]] FlaxBlenderbotSmallModel
- __call__
- encode
- decode
## FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
- __call__
- encode
- decode
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLOOM
## Overview
BLOOM ã¢ãã«ã¯ã[BigScience Workshop](https://bigscience.huggingface.co/) ãéããŠããŸããŸãªããŒãžã§ã³ã§ææ¡ãããŠããŸãã BigScience ã¯ãç ç©¶è
ãæéãšãªãœãŒã¹ãããŒã«ããŠå
±åã§é«ã广ãéæãããä»ã®ãªãŒãã³ ãµã€ãšã³ã¹ ã€ãã·ã¢ããããã€ã³ã¹ãã¬ãŒã·ã§ã³ãåŸãŠããŸãã
BLOOM ã®ã¢ãŒããã¯ãã£ã¯åºæ¬çã« GPT3 (次ã®ããŒã¯ã³äºæž¬ã®ããã®èªå·±ååž°ã¢ãã«) ã«äŒŒãŠããŸããã46 ã®ç°ãªãèšèªãš 13 ã®ããã°ã©ãã³ã°èšèªã§ãã¬ãŒãã³ã°ãããŠããŸããã¢ãã«ã®ããã€ãã®å°ããªããŒãžã§ã³ãåãããŒã¿ã»ããã§ãã¬ãŒãã³ã°ãããŠããŸãã BLOOM ã¯æ¬¡ã®ããŒãžã§ã³ã§å©çšã§ããŸãã
- [bloom-560m](https://huggingface.co/bigscience/bloom-560m)
- [bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
- [bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
- [bloom-3b](https://huggingface.co/bigscience/bloom-3b)
- [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
- [bloom](https://huggingface.co/bigscience/bloom) (176B parameters)
## Resources
BLOOM ã䜿ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="text-generation"/>
- [`BloomForCausalLM`] ã¯ããã® [å æèšèªã¢ããªã³ã°ã®ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ã§ãµããŒããããŠããŸãã

以äžãåç
§ããŠãã ããã
- [Causal language modeling task guide](../tasks/language_modeling)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
⚡️ Inference
- A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization).
- A blog on [Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts).
⚙️ Training
- A blog on [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed).
## BloomConfig
[[autodoc]] BloomConfig
- all
## BloomTokenizerFast
[[autodoc]] BloomTokenizerFast
- all
<frameworkcontent>
<pt>
## BloomModel
[[autodoc]] BloomModel
- forward
## BloomForCausalLM
[[autodoc]] BloomForCausalLM
- forward
## BloomForSequenceClassification
[[autodoc]] BloomForSequenceClassification
- forward
## BloomForTokenClassification
[[autodoc]] BloomForTokenClassification
- forward
## BloomForQuestionAnswering
[[autodoc]] BloomForQuestionAnswering
- forward
</pt>
<jax>
## FlaxBloomModel
[[autodoc]] FlaxBloomModel
- __call__
## FlaxBloomForCausalLM
[[autodoc]] FlaxBloomForCausalLM
- __call__
</jax>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/model_doc/auto.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the `from_pretrained()` method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of [`AutoConfig`], [`AutoModel`], and [`AutoTokenizer`] will directly create a class of the relevant architecture. For instance:
```python
model = AutoModel.from_pretrained("bert-base-cased")
```
will create a model that is an instance of [`BertModel`].
There is one class of `AutoModel` for each task, and for each backend (PyTorch, TensorFlow, or Flax).
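The same name-based resolution applies to configs and tokenizers; a small illustration (the `bert-base-cased` checkpoint is just an example):

```python
from transformers import AutoConfig, AutoTokenizer

# Both calls inspect the checkpoint and resolve to BERT-specific classes.
config = AutoConfig.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

print(type(config).__name__)     # BertConfig
print(type(tokenizer).__name__)  # BertTokenizerFast (when fast tokenizers are available)
```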
## Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom class of model `NewModel`, make sure you have a `NewModelConfig`, then you can add those to the auto classes like this:
```python
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
```
You will then be able to use the auto classes like you would usually do!
<Tip warning={true}>
If your `NewModelConfig` is a subclass of [`~transformers.PretrainedConfig`], make sure its `model_type` attribute is set to the same key you use when registering the config (here `"new-model"`).
Likewise, if your `NewModel` is a subclass of [`PreTrainedModel`], make sure its `config_class` attribute is set to the same class you use when registering the model (here `NewModelConfig`).
</Tip>
## AutoConfig
[[autodoc]] AutoConfig
## AutoTokenizer
[[autodoc]] AutoTokenizer
## AutoFeatureExtractor
[[autodoc]] AutoFeatureExtractor
## AutoImageProcessor
[[autodoc]] AutoImageProcessor
## AutoProcessor
[[autodoc]] AutoProcessor
## Generic model classes
The following auto classes are available for instantiating a base model class without a specific head.
### AutoModel
[[autodoc]] AutoModel
### TFAutoModel
[[autodoc]] TFAutoModel
### FlaxAutoModel
[[autodoc]] FlaxAutoModel
## Generic pretraining classes
The following auto classes are available for instantiating a model with a pretraining head.
### AutoModelForPreTraining
[[autodoc]] AutoModelForPreTraining
### TFAutoModelForPreTraining
[[autodoc]] TFAutoModelForPreTraining
### FlaxAutoModelForPreTraining
[[autodoc]] FlaxAutoModelForPreTraining
## Natural Language Processing
The following auto classes are available for the following natural language processing tasks.
### AutoModelForCausalLM
[[autodoc]] AutoModelForCausalLM
### TFAutoModelForCausalLM
[[autodoc]] TFAutoModelForCausalLM
### FlaxAutoModelForCausalLM
[[autodoc]] FlaxAutoModelForCausalLM
### AutoModelForMaskedLM
[[autodoc]] AutoModelForMaskedLM
### TFAutoModelForMaskedLM
[[autodoc]] TFAutoModelForMaskedLM
### FlaxAutoModelForMaskedLM
[[autodoc]] FlaxAutoModelForMaskedLM
### AutoModelForMaskGeneration
[[autodoc]] AutoModelForMaskGeneration
### TFAutoModelForMaskGeneration
[[autodoc]] TFAutoModelForMaskGeneration
### AutoModelForSeq2SeqLM
[[autodoc]] AutoModelForSeq2SeqLM
### TFAutoModelForSeq2SeqLM
[[autodoc]] TFAutoModelForSeq2SeqLM
### FlaxAutoModelForSeq2SeqLM
[[autodoc]] FlaxAutoModelForSeq2SeqLM
### AutoModelForSequenceClassification
[[autodoc]] AutoModelForSequenceClassification
### TFAutoModelForSequenceClassification
[[autodoc]] TFAutoModelForSequenceClassification
### FlaxAutoModelForSequenceClassification
[[autodoc]] FlaxAutoModelForSequenceClassification
### AutoModelForMultipleChoice
[[autodoc]] AutoModelForMultipleChoice
### TFAutoModelForMultipleChoice
[[autodoc]] TFAutoModelForMultipleChoice
### FlaxAutoModelForMultipleChoice
[[autodoc]] FlaxAutoModelForMultipleChoice
### AutoModelForNextSentencePrediction
[[autodoc]] AutoModelForNextSentencePrediction
### TFAutoModelForNextSentencePrediction
[[autodoc]] TFAutoModelForNextSentencePrediction
### FlaxAutoModelForNextSentencePrediction
[[autodoc]] FlaxAutoModelForNextSentencePrediction
### AutoModelForTokenClassification
[[autodoc]] AutoModelForTokenClassification
### TFAutoModelForTokenClassification
[[autodoc]] TFAutoModelForTokenClassification
### FlaxAutoModelForTokenClassification
[[autodoc]] FlaxAutoModelForTokenClassification
### AutoModelForQuestionAnswering
[[autodoc]] AutoModelForQuestionAnswering
### TFAutoModelForQuestionAnswering
[[autodoc]] TFAutoModelForQuestionAnswering
### FlaxAutoModelForQuestionAnswering
[[autodoc]] FlaxAutoModelForQuestionAnswering
### AutoModelForTextEncoding
[[autodoc]] AutoModelForTextEncoding
### TFAutoModelForTextEncoding
[[autodoc]] TFAutoModelForTextEncoding
## Computer vision
The following auto classes are available for the following computer vision tasks.
### AutoModelForDepthEstimation
[[autodoc]] AutoModelForDepthEstimation
### AutoModelForImageClassification
[[autodoc]] AutoModelForImageClassification
### TFAutoModelForImageClassification
[[autodoc]] TFAutoModelForImageClassification
### FlaxAutoModelForImageClassification
[[autodoc]] FlaxAutoModelForImageClassification
### AutoModelForVideoClassification
[[autodoc]] AutoModelForVideoClassification
### AutoModelForMaskedImageModeling
[[autodoc]] AutoModelForMaskedImageModeling
### TFAutoModelForMaskedImageModeling
[[autodoc]] TFAutoModelForMaskedImageModeling
### AutoModelForObjectDetection
[[autodoc]] AutoModelForObjectDetection
### AutoModelForImageSegmentation
[[autodoc]] AutoModelForImageSegmentation
### AutoModelForImageToImage
[[autodoc]] AutoModelForImageToImage
### AutoModelForSemanticSegmentation
[[autodoc]] AutoModelForSemanticSegmentation
### TFAutoModelForSemanticSegmentation
[[autodoc]] TFAutoModelForSemanticSegmentation
### AutoModelForInstanceSegmentation
[[autodoc]] AutoModelForInstanceSegmentation
### AutoModelForUniversalSegmentation
[[autodoc]] AutoModelForUniversalSegmentation
### AutoModelForZeroShotImageClassification
[[autodoc]] AutoModelForZeroShotImageClassification
### TFAutoModelForZeroShotImageClassification
[[autodoc]] TFAutoModelForZeroShotImageClassification
### AutoModelForZeroShotObjectDetection
[[autodoc]] AutoModelForZeroShotObjectDetection
## Audio
The following auto classes are available for the following audio tasks.
### AutoModelForAudioClassification
[[autodoc]] AutoModelForAudioClassification
### TFAutoModelForAudioClassification
[[autodoc]] TFAutoModelForAudioClassification
### AutoModelForAudioFrameClassification
[[autodoc]] AutoModelForAudioFrameClassification
### AutoModelForCTC
[[autodoc]] AutoModelForCTC
### AutoModelForSpeechSeq2Seq
[[autodoc]] AutoModelForSpeechSeq2Seq
### TFAutoModelForSpeechSeq2Seq
[[autodoc]] TFAutoModelForSpeechSeq2Seq
### FlaxAutoModelForSpeechSeq2Seq
[[autodoc]] FlaxAutoModelForSpeechSeq2Seq
### AutoModelForAudioXVector
[[autodoc]] AutoModelForAudioXVector
### AutoModelForTextToSpectrogram
[[autodoc]] AutoModelForTextToSpectrogram
### AutoModelForTextToWaveform
[[autodoc]] AutoModelForTextToWaveform
## Multimodal
The following auto classes are available for the following multimodal tasks.
### AutoModelForTableQuestionAnswering
[[autodoc]] AutoModelForTableQuestionAnswering
### TFAutoModelForTableQuestionAnswering
[[autodoc]] TFAutoModelForTableQuestionAnswering
### AutoModelForDocumentQuestionAnswering
[[autodoc]] AutoModelForDocumentQuestionAnswering
### TFAutoModelForDocumentQuestionAnswering
[[autodoc]] TFAutoModelForDocumentQuestionAnswering
### AutoModelForVisualQuestionAnswering
[[autodoc]] AutoModelForVisualQuestionAnswering
### AutoModelForVision2Seq
[[autodoc]] AutoModelForVision2Seq
### TFAutoModelForVision2Seq
[[autodoc]] TFAutoModelForVision2Seq
### FlaxAutoModelForVision2Seq
[[autodoc]] FlaxAutoModelForVision2Seq
hf_public_repos/transformers/docs/source/ja/model_doc/convnext.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvNeXT
## Overview
The ConvNeXT model was proposed in [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
*The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.jpg"
alt="æç»" width="600"/>
<small> ConvNeXT architecture. Taken from the <a href="https://arxiv.org/abs/2201.03545">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of the model was contributed by [ariG23498](https://github.com/ariG23498),
[gante](https://github.com/gante), and [sayakpaul](https://github.com/sayakpaul) (equal contribution). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
<PipelineTag pipeline="image-classification"/>
- [`ConvNextForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
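For a quick sanity check, ConvNeXT can also be tried through the image-classification pipeline; a minimal sketch, assuming the `facebook/convnext-tiny-224` checkpoint and a sample COCO image URL:

```python
from transformers import pipeline

# Build an image-classification pipeline backed by a ConvNeXT checkpoint.
classifier = pipeline("image-classification", model="facebook/convnext-tiny-224")

# Image pipelines accept local paths, PIL images, or URLs.
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")

# Each prediction is a dict with an ImageNet "label" and a confidence "score".
print(predictions[0]["label"])
```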
## ConvNextConfig
[[autodoc]] ConvNextConfig
## ConvNextFeatureExtractor
[[autodoc]] ConvNextFeatureExtractor
## ConvNextImageProcessor
[[autodoc]] ConvNextImageProcessor
- preprocess
<frameworkcontent>
<pt>
## ConvNextModel
[[autodoc]] ConvNextModel
- forward
## ConvNextForImageClassification
[[autodoc]] ConvNextForImageClassification
- forward
</pt>
<tf>
## TFConvNextModel
[[autodoc]] TFConvNextModel
- call
## TFConvNextForImageClassification
[[autodoc]] TFConvNextForImageClassification
- call
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/model_doc/audio-spectrogram-transformer.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Audio Spectrogram Transformer
## Overview
The Audio Spectrogram Transformer model was proposed in [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. The Audio Spectrogram Transformer applies a [Vision Transformer](vit) to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results for audio classification.
The abstract from the paper is the following:
*In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/audio_spectogram_transformer_architecture.png"
alt="drawing" width="600"/>
<small> Audio Spectrogram Transformer architecture. Taken from the <a href="https://arxiv.org/abs/2104.01778">original paper</a>.</small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/YuanGongND/ast).
## 䜿çšäžã®ãã³ã
- ç¬èªã®ããŒã¿ã»ããã§Audio Spectrogram TransformerïŒASTïŒããã¡ã€ã³ãã¥ãŒãã³ã°ããå Žåãå
¥åã®æ£èŠåïŒå
¥åã®å¹³åã0ãæšæºåå·®ã0.5ã«ããããšïŒåŠçããããšãæšå¥šãããŸãã[`ASTFeatureExtractor`]ã¯ãããåŠçããŸããããã©ã«ãã§ã¯AudioSetã®å¹³åãšæšæºåå·®ã䜿çšããŠããããšã«æ³šæããŠãã ãããèè
ãäžæµã®ããŒã¿ã»ããã®çµ±èšãã©ã®ããã«èšç®ããŠãããã¯ã[`ast/src/get_norm_stats.py`](https://github.com/YuanGongND/ast/blob/master/src/get_norm_stats.py)ã§ç¢ºèªããããšãã§ããŸãã
- ASTã¯äœãåŠç¿çãå¿
èŠã§ãã èè
ã¯[PSLAè«æ](https://arxiv.org/abs/2102.01243)ã§ææ¡ãããCNNã¢ãã«ã«æ¯ã¹ãŠ10åå°ããåŠç¿çã䜿çšããŠããŸãïŒãçŽ æ©ãåæãããããã¿ã¹ã¯ã«é©ããåŠç¿çãšåŠç¿çã¹ã±ãžã¥ãŒã©ãŒãæ¢ãããšããå§ãããŸãã
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
<PipelineTag pipeline="audio-classification"/>
- A notebook illustrating inference with AST for audio classification can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/AST).
- [`ASTForAudioClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
- See also: [Audio classification task guide](../tasks/audio_classification).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## ASTConfig
[[autodoc]] ASTConfig
## ASTFeatureExtractor
[[autodoc]] ASTFeatureExtractor
- __call__
## ASTModel
[[autodoc]] ASTModel
- forward
## ASTForAudioClassification
[[autodoc]] ASTForAudioClassification
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/codegen.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CodeGen
## Overview
The CodeGen model was proposed in [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.
CodeGen is an autoregressive language model for program synthesis trained sequentially on [The Pile](https://pile.eleuther.ai/), BigQuery, and BigPython.
The abstract from the paper is the following:
*Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between a user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: [this https URL](https://github.com/salesforce/codegen).*
This model was contributed by [Hiroaki Hayashi](https://huggingface.co/rooa).
The original code can be found [here](https://github.com/salesforce/codegen).
## Checkpoint Naming
* CodeGen model [checkpoints](https://huggingface.co/models?other=codegen) are available on different pre-training data with variable sizes.
* The format is: `Salesforce/codegen-{size}-{data}`, where
  * `size`: `350M`, `2B`, `6B`, `16B`
  * `data`:
    * `nl`: Pre-trained on the Pile
    * `multi`: Initialized with `nl`, then further pre-trained on multiple programming languages data
    * `mono`: Initialized with `multi`, then further pre-trained on Python data
* For example, `Salesforce/codegen-350M-mono` offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
## Usage example
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> checkpoint = "Salesforce/codegen-350M-mono"
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> text = "def hello_world():"
>>> completion = model.generate(**tokenizer(text, return_tensors="pt"))
>>> print(tokenizer.decode(completion[0]))
def hello_world():
print("Hello World")
hello_world()
```
## Resources
- [Causal language modeling task guide](../tasks/language_modeling)
## CodeGenConfig
[[autodoc]] CodeGenConfig
- all
## CodeGenTokenizer
[[autodoc]] CodeGenTokenizer
- save_vocabulary
## CodeGenTokenizerFast
[[autodoc]] CodeGenTokenizerFast
## CodeGenModel
[[autodoc]] CodeGenModel
- forward
## CodeGenForCausalLM
[[autodoc]] CodeGenForCausalLM
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/convbert.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvBERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=convbert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-convbert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/conv-bert-base">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
The ConvBERT model was proposed in [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
The abstract from the paper is the following:
*Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained models will be released.*
This model was contributed by [abhishek](https://huggingface.co/abhishek). The original implementation can be found here: https://github.com/yitu-opensource/ConvBert
## Usage tips
ConvBERT training tips are similar to those of BERT. For usage tips refer to the [BERT documentation](bert).
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
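As a minimal usage sketch (assuming the `YituTech/conv-bert-base` checkpoint), token-level hidden states can be extracted as follows:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = AutoModel.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("ConvBERT mixes convolution and self-attention.", return_tensors="pt")
outputs = model(**inputs)

# One hidden-state vector per input token: (batch_size, seq_len, hidden_size).
print(outputs.last_hidden_state.shape)
```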
## ConvBertConfig
[[autodoc]] ConvBertConfig
## ConvBertTokenizer
[[autodoc]] ConvBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## ConvBertTokenizerFast
[[autodoc]] ConvBertTokenizerFast
<frameworkcontent>
<pt>
## ConvBertModel
[[autodoc]] ConvBertModel
- forward
## ConvBertForMaskedLM
[[autodoc]] ConvBertForMaskedLM
- forward
## ConvBertForSequenceClassification
[[autodoc]] ConvBertForSequenceClassification
- forward
## ConvBertForMultipleChoice
[[autodoc]] ConvBertForMultipleChoice
- forward
## ConvBertForTokenClassification
[[autodoc]] ConvBertForTokenClassification
- forward
## ConvBertForQuestionAnswering
[[autodoc]] ConvBertForQuestionAnswering
- forward
</pt>
<tf>
## TFConvBertModel
[[autodoc]] TFConvBertModel
- call
## TFConvBertForMaskedLM
[[autodoc]] TFConvBertForMaskedLM
- call
## TFConvBertForSequenceClassification
[[autodoc]] TFConvBertForSequenceClassification
- call
## TFConvBertForMultipleChoice
[[autodoc]] TFConvBertForMultipleChoice
- call
## TFConvBertForTokenClassification
[[autodoc]] TFConvBertForTokenClassification
- call
## TFConvBertForQuestionAnswering
[[autodoc]] TFConvBertForQuestionAnswering
- call
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/model_doc/barthez.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BARThez
## Overview
The BARThez model was proposed in [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020.
The abstract of the paper:
*Inductive transfer learning, enabled by self-supervised learning, has taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.*
This model was contributed by [moussakam](https://huggingface.co/moussakam). The Authors' code can be found [here](https://github.com/moussaKam/BARThez).
<Tip>
BARThez implementation is the same as BART, except for tokenization. Refer to the [BART documentation](bart) for information on configuration classes and their parameters. BARThez-specific tokenizers are documented below.
</Tip>
### Resources
- BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check:
[examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md).
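As a sketch of sequence-to-sequence inference with a fine-tuned BARThez checkpoint (assuming `moussaKam/barthez-orangesum-abstract`, a checkpoint fine-tuned on OrangeSum; the input article is a made-up example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez-orangesum-abstract")
model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez-orangesum-abstract")

article = (
    "Citant les préoccupations de ses clients, le géant de la distribution "
    "a décidé de retirer ces produits de ses rayons."
)
inputs = tokenizer(article, return_tensors="pt")

# Generate a short abstractive summary of the French input text.
summary_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```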
## BarthezTokenizer
[[autodoc]] BarthezTokenizer
## BarthezTokenizerFast
[[autodoc]] BarthezTokenizerFast
hf_public_repos/transformers/docs/source/ja/model_doc/clip.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIP
## Overview
The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
The abstract from the paper is the following:
*State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.*
This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/openai/CLIP).
## Usage tips and example
CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. CLIP uses a ViT-like transformer to get visual features and a causal language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimensions. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model. The
[`CLIPTokenizer`] is used to encode the text. The [`CLIPProcessor`] wraps
[`CLIPImageProcessor`] and [`CLIPTokenizer`] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[`CLIPProcessor`] and [`CLIPModel`].
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Resources
CLIP ã䜿ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
- [ãªã¢ãŒã ã»ã³ã·ã³ã° (è¡æ) ç»åãšãã£ãã·ã§ã³ã䜿çšãã CLIP ã®åŸ®èª¿æŽ](https://huggingface.co/blog/fine-tune-clip-rsicd): [RSICD ããŒã¿ã»ãã](https://github.com/201528014227051/RSICD_optimal) ã䜿çšã㊠CLIP ã埮調æŽããæ¹æ³ãšãããŒã¿æ¡åŒµã«ããããã©ãŒãã³ã¹ã®å€åã®æ¯èŒã«é¢ããããã°æçš¿ã
- ãã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) ã¯ã[COCO ããŒã¿ã»ãã](https://cocodataset.org/#home) ã䜿çšããŠãäºåãã¬ãŒãã³ã°æžã¿ã®ããžã§ã³ ãšã³ã³ãŒããŒãšããã¹ã ãšã³ã³ãŒããŒããæ§æããã CLIP ã®ãããªããžã§ã³ããã¹ã ãã¥ã¢ã« ãšã³ã³ãŒã㌠ã¢ãã«ã®ãã¬ãŒãã³ã°æ¹æ³ã瀺ããŠããŸãã
<PipelineTag pipeline="image-to-text"/>
- ç»åãã£ãã·ã§ã³ã®ããŒã æ€çŽ¢ã«ããæšè«ã«äºåãã¬ãŒãã³ã°æžã¿ CLIP ã䜿çšããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing)ã ð
**ç»åæ€çŽ¢**
- äºåãã¬ãŒãã³ã°ããã CLIP ã䜿çšããç»åæ€çŽ¢ãš MRR (å¹³åéæ°é äœ) ã¹ã³ã¢ã®èšç®ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing)ã ð
- ç»åã®ååŸãšé¡äŒŒæ§ã¹ã³ã¢ã®è¡šç€ºã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb)ã ð
- å€èšèª CLIP ã䜿çšããŠç»åãšããã¹ããåããã¯ãã«ç©ºéã«ãããã³ã°ããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing)ã ð
- [Unsplash](https://unsplash.com) ããã³ [TMDB](https://www.themoviedb.org/) ããŒã¿ã»ããã䜿çšããŠãã»ãã³ãã£ãã¯ç»åæ€çŽ¢ã§ CLIP ãå®è¡ããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR)ã ð
**説æå¯èœæ§**
- å
¥åããŒã¯ã³ãšç»åã»ã°ã¡ã³ãã®é¡äŒŒæ§ãèŠèŠåããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb)ã ð
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãã
ãªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## CLIPConfig
[[autodoc]] CLIPConfig
- from_text_vision_configs
## CLIPTextConfig
[[autodoc]] CLIPTextConfig
## CLIPVisionConfig
[[autodoc]] CLIPVisionConfig
## CLIPTokenizer
[[autodoc]] CLIPTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CLIPTokenizerFast
[[autodoc]] CLIPTokenizerFast
## CLIPImageProcessor
[[autodoc]] CLIPImageProcessor
- preprocess
## CLIPFeatureExtractor
[[autodoc]] CLIPFeatureExtractor
## CLIPProcessor
[[autodoc]] CLIPProcessor
<frameworkcontent>
<pt>
## CLIPModel
[[autodoc]] CLIPModel
- forward
- get_text_features
- get_image_features
## CLIPTextModel
[[autodoc]] CLIPTextModel
- forward
## CLIPTextModelWithProjection
[[autodoc]] CLIPTextModelWithProjection
- forward
## CLIPVisionModelWithProjection
[[autodoc]] CLIPVisionModelWithProjection
- forward
## CLIPVisionModel
[[autodoc]] CLIPVisionModel
- forward
</pt>
<tf>
## TFCLIPModel
[[autodoc]] TFCLIPModel
- call
- get_text_features
- get_image_features
## TFCLIPTextModel
[[autodoc]] TFCLIPTextModel
- call
## TFCLIPVisionModel
[[autodoc]] TFCLIPVisionModel
- call
</tf>
<jax>
## FlaxCLIPModel
[[autodoc]] FlaxCLIPModel
- __call__
- get_text_features
- get_image_features
## FlaxCLIPTextModel
[[autodoc]] FlaxCLIPTextModel
- __call__
## FlaxCLIPTextModelWithProjection
[[autodoc]] FlaxCLIPTextModelWithProjection
- __call__
## FlaxCLIPVisionModel
[[autodoc]] FlaxCLIPVisionModel
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLIP-2
## Overview
BLIP-2 ã¢ãã«ã¯ã[BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ã§ææ¡ãããŸããã
Junnan LiãDongxu LiãSilvio SavareseãSteven Hoi ã«ãããã®ã§ããBLIP-2 ã¯ãããªãŒãºãããäºåãã¬ãŒãã³ã°æžã¿ã®ç»åãšã³ã³ãŒããŒãšå€§èŠæš¡èšèªã¢ãã« (LLM) ã®éã«é
眮ãã軜éã® 12 å±€ã® Transformer ãšã³ã³ãŒããŒããã¬ãŒãã³ã°ããããšã§äž¡è
ãæŽ»çšããããŸããŸãªèŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸããæã泚ç®ãã¹ãç¹ã¯ãBLIP-2 ããŒãã·ã§ãã VQAv2 ã«ãããŠã800 åãã©ã¡ãŒã¿ã®ã¢ãã«ã§ãã [Flamingo](https://arxiv.org/abs/2204.14198) ã 8.7% äžåããªããããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã 54 åã® 1 ã§æžãç¹ã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*å€§èŠæš¡ã¢ãã«ã®ãšã³ãããŒãšã³ãã®ãã¬ãŒãã³ã°ã«ãããèŠèŠãšèšèªã®äºåãã¬ãŒãã³ã°ã®ã³ã¹ãã¯ãŸããŸãæ³å€ãªãã®ã«ãªã£ãŠããŠããŸãããã®è«æã§ã¯ãåžè²©ã®åçµæžã¿äºåãã¬ãŒãã³ã°ç»åãšã³ã³ãŒããšåçµãããå€§èŠæš¡èšèªã¢ãã«ããèŠèŠèšèªã®äºåãã¬ãŒãã³ã°ãããŒãã¹ãã©ãããããæ±çšçã§å¹ççãªäºåãã¬ãŒãã³ã°æŠç¥ã§ãã BLIP-2 ãææ¡ããŸãã BLIP-2 ã¯ã2 段éã§äºåãã¬ãŒãã³ã°ããã軜éã® Querying Transformer ã§ã¢ããªãã£ã®ã®ã£ãããæ©æž¡ãããŸããæåã®ã¹ããŒãžã§ã¯ãããªãŒãºãããç»åãšã³ã³ãŒããŒããåŠç¿ããèŠèŠèšèªè¡šçŸãããŒãã¹ãã©ããããŸãã第 2 段éã§ã¯ãåçµãããèšèªã¢ãã«ããèŠèŠããèšèªãžã®çæåŠç¿ãããŒãã¹ãã©ããããŸãã BLIP-2 ã¯ãæ¢åã®æ¹æ³ããããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã倧å¹
ã«å°ãªãã«ãããããããããŸããŸãªèŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸããããšãã°ãç§ãã¡ã®ã¢ãã«ã¯ããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã 54 åã® 1 å°ãªããŒãã·ã§ãã VQAv2 ã§ãFlamingo80B ã 8.7% äžåã£ãŠããŸãããŸããèªç¶èšèªã®åœä»€ã«åŸãããšãã§ããããŒãã·ã§ããç»åããããã¹ããžã®çæãšããã¢ãã«ã®æ°ããæ©èœãå®èšŒããŸã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
<small> BLIP-2 ã¢ãŒããã¯ãã£ã <a href="https://arxiv.org/abs/2301.12597">å
ã®è«æ</a> ããæç²ã</small>
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/salesforce/LAVIS/tree/5ee63d688ba4cebff63acee04adaef2dee9af207) ã«ãããŸãã
## Usage tips
- BLIP-2 ã¯ãç»åãšãªãã·ã§ã³ã®ããã¹ã ããã³ãããæå®ããŠæ¡ä»¶ä»ãããã¹ããçæããããã«äœ¿çšã§ããŸããæšè«æã«ã¯ã [`generate`] ã¡ãœããã䜿çšããããšããå§ãããŸãã
- [`Blip2Processor`] ã䜿çšããŠã¢ãã«çšã®ç»åãæºåããäºæž¬ãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ãããšãã§ããŸãã
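äžèšã®ãã³ãã«æ²¿ã£ãŠã[`Blip2Processor`] ã§ç»åãšããã³ãããã¢ãã«å
¥åã«å€æããæå°éã®ã¹ã±ããã以äžã«ç€ºããŸã (ãã§ãã¯ãã€ã³ãåã»ç»å URLã»è³ªæã¯äžäŸã§ã)ãåŸããã `inputs` ã [`Blip2ForConditionalGeneration`] ã® [`generate`] ã¡ãœããã«æž¡ãã°ããã£ãã·ã§ã³ãåçã®ããã¹ããçæã§ããŸãã

```python
import requests
from PIL import Image
from transformers import Blip2Processor

# ããã»ããµãç»åã®ååŠçãšããã³ããã®ããŒã¯ã³åããŸãšããŠè¡ã
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Question: how many cats are there? Answer:"

# input_ids / attention_mask (ããã¹ã) ãš pixel_values (ç»å) ãè¿ã
inputs = processor(images=image, text=prompt, return_tensors="pt")
print({k: tuple(v.shape) for k, v in inputs.items()})
```

çæãŸã§è¡ãå Žåã¯ã`Blip2ForConditionalGeneration.from_pretrained(...)` ã§ã¢ãã«ãããŒããã`model.generate(**inputs)` ã®åºåã `processor.batch_decode` ã§ãã³ãŒãããŸãã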
## Resources
BLIP-2 ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
- ç»åãã£ãã·ã§ã³ãããžã¥ã¢ã«è³ªåå¿ç (VQA)ãããã³ãã£ããã®ãããªäŒè©±ã®ããã® BLIP-2 ã®ã㢠ããŒãããã¯ã¯ã[ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2) ã«ãããŸãã
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## Blip2Config
[[autodoc]] Blip2Config
- from_vision_qformer_text_configs
## Blip2VisionConfig
[[autodoc]] Blip2VisionConfig
## Blip2QFormerConfig
[[autodoc]] Blip2QFormerConfig
## Blip2Processor
[[autodoc]] Blip2Processor
## Blip2VisionModel
[[autodoc]] Blip2VisionModel
- forward
## Blip2QFormerModel
[[autodoc]] Blip2QFormerModel
- forward
## Blip2Model
[[autodoc]] Blip2Model
- forward
- get_text_features
- get_image_features
- get_qformer_features
## Blip2ForConditionalGeneration
[[autodoc]] Blip2ForConditionalGeneration
- forward
  - generate
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=bert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-bert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/bert-base-uncased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
BERT ã¢ãã«ã¯ãJacob DevlinãMing-Wei ChangãKenton LeeãKristina Toutanova ã«ãã£ãŠ [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) ã§ææ¡ãããŸãããããã¯ãToronto Book Corpus ãš Wikipedia ãããªãå€§èŠæš¡ãªã³ãŒãã¹äžã§ããã¹ã¯ãããèšèªã¢ããªã³ã°ç®æšãšæ¬¡ã®æã®äºæž¬ã®çµã¿åããã«ãã£ãŠäºåãã¬ãŒãã³ã°ãããåæ¹åãã©ã³ã¹ãã©ãŒããŒã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BERT ãšåŒã°ããæ°ããèšèªè¡šçŸã¢ãã«ãå°å
¥ããŸãããã㯠Bidirectional Encoder Representations from Transformers ã®ç¥ã§ããæè¿ã®èšèªè¡šçŸã¢ãã«ãšã¯ç°ãªããBERT ã¯ããã¹ãŠã®ã¬ã€ã€ãŒã§å·Šå³äž¡æ¹ã®ã³ã³ããã¹ããå
±åã§æ¡ä»¶ä»ãããããšã«ãããã©ãã«ã®ãªãããã¹ãããæ·±ãåæ¹å衚çŸãäºåãã¬ãŒãã³ã°ããããã«èšèšãããŠããŸãããã®çµæãäºåãã¬ãŒãã³ã°ããã BERT ã¢ãã«ã¯ãåºåå±€ã 1 ã€è¿œå ããŠåŸ®èª¿æŽããã ãã§ãã¿ã¹ã¯åºæã®ã¢ãŒããã¯ãã£ã®å€§å¹
ãªå€æŽãªãã«ã質åå¿çãèšèªæšè«ãªã©ã®å¹
åºãã¿ã¹ã¯ã§æå
端ã®ã¢ãã«ãäœæã§ããŸãã*
*BERT ã¯æŠå¿µçã«ã¯ã·ã³ãã«ã§ãããçµéšçã«åŒ·åã§ãã11 ã®èªç¶èšèªåŠçã¿ã¹ã¯ã§æ°ããæå
端ã®çµæãåŸãããŸãããGLUE ã¹ã³ã¢ã¯ 80.5% (çµ¶å¯Ÿå€ã§ 7.7 ãã€ã³ãã®æ¹å)ãMultiNLI ã®ç²ŸåºŠã¯ 86.7% (çµ¶å¯Ÿå€ã§ 4.6% ã®åäž)ãSQuAD v1.1 質åå¿çãã¹ãã® F1 㯠93.2 (çµ¶å¯Ÿå€ã§ 1.5 ãã€ã³ãã®æ¹å)ãSQuAD v2.0 ãã¹ãã® F1 㯠83.1 (çµ¶å¯Ÿå€ã§ 5.1 ãã€ã³ãã®æ¹å) ã§ãã*
## Usage tips
- BERT ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·Šåã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- BERT ã¯ããã¹ã¯èšèªã¢ããªã³ã° (MLM) ãšæ¬¡ã®æäºæž¬ (NSP) ã®ç®æšã§ãã¬ãŒãã³ã°ãããŠããŸãããã¹ã¯ãããããŒã¯ã³ã®äºæž¬ã NLU å
šè¬ã«ã¯å¹ççã§ãããããã¹ãçæã«ã¯æé©ã§ã¯ãããŸããã
- ã©ã³ãã ãã¹ãã³ã°ã«ãã£ãŠå
¥åãç ŽæãããŸããããæ£ç¢ºã«ã¯ãäºåãã¬ãŒãã³ã°äžã«ãããŒã¯ã³ã®æå®ãããå²å (é垞㯠15%) ãæ¬¡ã®ããã«ãã¹ã¯ãããŸãã
  * 確ç 0.8 ã§ç¹å¥ãªãã¹ã¯ ããŒã¯ã³ã«çœ®æ
  * 確ç 0.1 ã§å
ã®ããŒã¯ã³ãšã¯ç°ãªãã©ã³ãã ãªããŒã¯ã³ã«çœ®æ
  * 確ç 0.1 ã§å
ã®ããŒã¯ã³ã®ãŸãŸç¶æ
- ã¢ãã«ã¯å
ã®æãäºæž¬ããå¿
èŠããããŸãããããã«å ã㊠2 çªç®ã®ç®çããããŸããå
¥å㯠2 ã€ã®æ A ãš B (éã«åé¢ããŒã¯ã³ãã) ã§ã確ç 50% ã§ã¯æã¯ã³ãŒãã¹å
ã§é£ç¶ããŠãããæ®ãã® 50% ã§ã¯ç¡é¢ä¿ã§ããã¢ãã«ã¯ããããã®æãé£ç¶ããŠãããã©ãããäºæž¬ããå¿
èŠããããŸãã
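äžèšã® 15% ãã¹ãã³ã° (ãã¹ã¯å¯Ÿè±¡ã®ãã¡ 80% ã¯ãã¹ã¯ ããŒã¯ã³ã10% ã¯ã©ã³ãã ãªããŒã¯ã³ã10% ã¯ãã®ãŸãŸ) ã¯ã次ã®ãããªçŽ Python ã®ã¹ã±ããã§è¡šããŸããèªåœãé¢æ°åã¯èª¬æçšã®ä»®å®ã§ãããå®éã®å®è£
ã§ã¯ããŒã¯ã³ ID ã®ãã³ãœã«äžã§åæ§ã®åŠçãè¡ãããŸãã

```python
import random

MASK_TOKEN = "[MASK]"
VOCAB = ["cat", "dog", "bird", "fish"]  # ã©ã³ãã 眮æçšã®ä»®ã®èªåœ

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT ã®äºåãã¬ãŒãã³ã°ãšåæ§ã® 80/10/10 ã«ãŒã«ã§ããŒã¯ã³ããã¹ã¯ããã"""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # ãã®äœçœ®ã ãæå€±ãèšç®ãã
            r = rng.random()
            if r < 0.8:
                masked.append(MASK_TOKEN)        # 80%: ãã¹ã¯ ããŒã¯ã³ã«çœ®æ
            elif r < 0.9:
                masked.append(rng.choice(VOCAB))  # 10%: ã©ã³ãã ãªããŒã¯ã³ã«çœ®æ
            else:
                masked.append(tok)                # 10%: ãã®ãŸãŸç¶æ
        else:
            labels.append(None)  # ãã¹ã¯ãããŠããªãäœçœ®ã¯æå€±èšç®ããé€å€
            masked.append(tok)
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(tokens, seed=42))
```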
ãã®ã¢ãã«ã¯ [thomwolf](https://huggingface.co/thomwolf) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/bert) ã«ãããŸãã
## Resources
BERT ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºããã) ãªãœãŒã¹ã®ãªã¹ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="text-classification"/>
- [å¥ã®èšèªã§ã® BERT ããã¹ãåé¡](https://www.philschmid.de/bert-text-classification-in-a-different-language) ã«é¢ããããã°æçš¿ã
- [ãã«ãã©ãã« ããã¹ãåé¡ã®ããã® BERT (ããã³ãã®å人) ã®åŸ®èª¿æŽ](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb) ã®ããŒãããã¯.
- [PyTorch ã䜿çšãããã«ãã©ãã«åé¡ã®ããã® BERT ã®åŸ®èª¿æŽ](https://colab.research.google.com/github/abhmishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb) ã®æ¹æ³ã«é¢ããããŒãããã¯ã
- [èŠçŽã®ããã« BERT ã䜿çšã㊠EncoderDecoder ã¢ãã«ããŠã©ãŒã ã¹ã¿ãŒããã](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) æ¹æ³ã«é¢ããããŒãããã¯ã
- [`BertForSequenceClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)ã
- [`TFBertForSequenceClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)ã
- [`FlaxBertForSequenceClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb)ã
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
<PipelineTag pipeline="token-classification"/>
- [Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition](https://www.philschmid.de/huggingface-transformers-keras-tf) ã®äœ¿ç𿹿³ã«é¢ããããã°æçš¿ã
- ååèªã®æåã®åèªéšå (wordpiece) ã®ã¿ã«ã©ãã«ãä»äžãã [åºæè¡šçŸèªèã®ããã® BERT ã®åŸ®èª¿æŽ](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb) ã®ããŒãããã¯ãåèªã®ã©ãã«ããã¹ãŠã®åèªéšåã«äŒæãããå Žåã¯ã代ããã«ãã®ããŒãããã¯ã® [ãã¡ãã®ããŒãžã§ã³](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb) ãåç
§ããŠãã ããã
- [`BertForTokenClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)ã
- [`TFBertForTokenClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)ã
- [`FlaxBertForTokenClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification) ã«ãã£ãŠãµããŒããããŠããŸãã
- [ããŒã¯ã³åé¡](https://huggingface.co/course/chapter7/2?fw=pt) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
<PipelineTag pipeline="fill-mask"/>
- [`BertForMaskedLM`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) ã§ãµããŒããããŠããã [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)ã
- [`TFBertForMaskedLM`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [`FlaxBertForMaskedLM`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb) ã§ãµããŒããããŠããŸãã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã°](https://huggingface.co/course/chapter7/3?fw=pt) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
<PipelineTag pipeline="question-answering"/>
- [`BertForQuestionAnswering`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)ã
- [`TFBertForQuestionAnswering`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)ã
- [`FlaxBertForQuestionAnswering`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering) ã§ãµããŒããããŠããŸãã
- [質ååç](https://huggingface.co/course/chapter7/7?fw=pt) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
**è€æ°ã®éžæè¢**
- [`BertForMultipleChoice`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)ã
- [`TFBertForMultipleChoice`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)ã
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
â¡ïž **æšè«**
- [Hugging Face Transformers ãš AWS Inferentia ã䜿çšã㊠BERT æšè«ãé«éåãã](https://huggingface.co/blog/bert-inferentia-sagemaker) æ¹æ³ã«é¢ããããã°æçš¿ã
- [GPU äžã® DeepSpeed-Inference ã䜿çšã㊠BERT æšè«ãé«éåãã](https://www.philschmid.de/bert-deepspeed-inference) æ¹æ³ã«é¢ããããã°æçš¿ã
âïž **äºåãã¬ãŒãã³ã°**
- [Hugging Face Transformers ãš Habana Gaudi ã䜿çšãã BERT ã®äºåãã¬ãŒãã³ã°](https://www.philschmid.de/pre-training-bert-habana) ã«é¢ããããã°æçš¿ã
ð **ãããã€**
- [ãã°ãã§ã€ã¹æé©åã§ãã©ã³ã¹ãã©ãŒããŒã ONNX ã«å€æãã](https://www.philschmid.de/convert-transformers-to-onnx) æ¹æ³ã«é¢ããããã°æçš¿ã
- [AWS äžã® Habana Gaudi ã䜿çšãããã°é¡ãã©ã³ã¹ãã©ãŒããŒã®ããã®æ·±å±€åŠç¿ç°å¢ã®ã»ããã¢ãã](https://www.philschmid.de/getting-started-habana-gaudi#conclusion) æ¹æ³ã«é¢ããããã°æçš¿ã
- [Hugging Face TransformersãAmazon SageMakerãTerraform ã¢ãžã¥ãŒã«ã䜿çšããèªåã¹ã±ãŒãªã³ã° BERT](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced) ã«é¢ããããã°æçš¿ã
- [HuggingFaceãAWS LambdaãDocker ã䜿çšãããµãŒããŒã¬ã¹ BERT](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker) ã«é¢ããããã°æçš¿ã
- [Amazon SageMaker ãš Training Compiler ã䜿çšãã Hugging Face Transformers BERT 埮調æŽ](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler) ã«é¢ããããã°æçš¿ã
- [Transformers ãš Amazon SageMaker ã䜿çšãã BERT ã®ã¿ã¹ã¯åºæã®ç¥èã®èžç](https://www.philschmid.de/knowledge-distillation-bert-transformers) ã«é¢ããããã°æçš¿ã
## BertConfig
[[autodoc]] BertConfig
- all
## BertTokenizer
[[autodoc]] BertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
<frameworkcontent>
<pt>
## BertTokenizerFast
[[autodoc]] BertTokenizerFast
</pt>
<tf>
## TFBertTokenizer
[[autodoc]] TFBertTokenizer
</tf>
</frameworkcontent>
## Bert specific outputs
[[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput
[[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
[[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput
<frameworkcontent>
<pt>
## BertModel
[[autodoc]] BertModel
- forward
## BertForPreTraining
[[autodoc]] BertForPreTraining
- forward
## BertLMHeadModel
[[autodoc]] BertLMHeadModel
- forward
## BertForMaskedLM
[[autodoc]] BertForMaskedLM
- forward
## BertForNextSentencePrediction
[[autodoc]] BertForNextSentencePrediction
- forward
## BertForSequenceClassification
[[autodoc]] BertForSequenceClassification
- forward
## BertForMultipleChoice
[[autodoc]] BertForMultipleChoice
- forward
## BertForTokenClassification
[[autodoc]] BertForTokenClassification
- forward
## BertForQuestionAnswering
[[autodoc]] BertForQuestionAnswering
- forward
</pt>
<tf>
## TFBertModel
[[autodoc]] TFBertModel
- call
## TFBertForPreTraining
[[autodoc]] TFBertForPreTraining
- call
## TFBertLMHeadModel
[[autodoc]] TFBertLMHeadModel
- call
## TFBertForMaskedLM
[[autodoc]] TFBertForMaskedLM
- call
## TFBertForNextSentencePrediction
[[autodoc]] TFBertForNextSentencePrediction
- call
## TFBertForSequenceClassification
[[autodoc]] TFBertForSequenceClassification
- call
## TFBertForMultipleChoice
[[autodoc]] TFBertForMultipleChoice
- call
## TFBertForTokenClassification
[[autodoc]] TFBertForTokenClassification
- call
## TFBertForQuestionAnswering
[[autodoc]] TFBertForQuestionAnswering
- call
</tf>
<jax>
## FlaxBertModel
[[autodoc]] FlaxBertModel
- __call__
## FlaxBertForPreTraining
[[autodoc]] FlaxBertForPreTraining
- __call__
## FlaxBertForCausalLM
[[autodoc]] FlaxBertForCausalLM
- __call__
## FlaxBertForMaskedLM
[[autodoc]] FlaxBertForMaskedLM
- __call__
## FlaxBertForNextSentencePrediction
[[autodoc]] FlaxBertForNextSentencePrediction
- __call__
## FlaxBertForSequenceClassification
[[autodoc]] FlaxBertForSequenceClassification
- __call__
## FlaxBertForMultipleChoice
[[autodoc]] FlaxBertForMultipleChoice
- __call__
## FlaxBertForTokenClassification
[[autodoc]] FlaxBertForTokenClassification
- __call__
## FlaxBertForQuestionAnswering
[[autodoc]] FlaxBertForQuestionAnswering
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BertJapanese
## Overview
BERT ã¢ãã«ã¯æ¥æ¬èªããã¹ãã§ãã¬ãŒãã³ã°ãããŸããã
2 ã€ã®ç°ãªãããŒã¯ã³åæ¹æ³ãåããã¢ãã«ããããŸãã
- MeCab ãš WordPiece ã䜿çšããŠããŒã¯ã³åããŸããããã«ã¯ã[MeCab](https://taku910.github.io/mecab/) ã®ã©ãããŒã§ãã [fugashi](https://github.com/polm/fugashi) ãšãã远å ã®äŸåé¢ä¿ãå¿
èŠã§ãã
- æåã«ããŒã¯ã³åããŸãã
*MecabTokenizer* ã䜿çšããã«ã¯ã`pip install transformers["ja"]` (ãŸãã¯ããœãŒã¹ããã€ã³ã¹ããŒã«ããå Žå㯠`pip install -e .["ja"]`) ãå®è¡ããŠäŸåé¢ä¿ãã€ã³ã¹ããŒã«ããå¿
èŠããããŸãã
詳现㯠[cl-tohoku ãªããžããª](https://github.com/cl-tohoku/bert-japanese) ãåç
§ããŠãã ããã
MeCab ããã³ WordPiece ããŒã¯ã³åã§ã¢ãã«ã䜿çšããäŸ:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ## Input Japanese Text
>>> line = "åŸèŒ©ã¯ç«ã§ããã"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] åŸèŒ© ã¯ ç« ã§ ãã ã [SEP]
>>> outputs = bertjapanese(**inputs)
```
æåããŒã¯ã³åã䜿çšããã¢ãã«ã®äœ¿çšäŸ:
```python
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> ## Input Japanese Text
>>> line = "åŸèŒ©ã¯ç«ã§ããã"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] åŸ èŒ© ã¯ ç« ã§ ã ã ã [SEP]
>>> outputs = bertjapanese(**inputs)
```
<Tip>
- ãã®å®è£
ã¯ããŒã¯ã³åæ¹æ³ãé€ã㊠BERT ãšåãã§ãããã®ä»ã®äœ¿çšäŸã«ã€ããŠã¯ã[BERT ã®ããã¥ã¡ã³ã](bert) ãåç
§ããŠãã ããã
ãã®ã¢ãã«ã¯ [cl-tohoku](https://huggingface.co/cl-tohoku) ããæäŸãããŸããã
## BertJapaneseTokenizer
[[autodoc]] BertJapaneseTokenizer
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ALIGN
## æŠèŠ
ALIGNã¢ãã«ã¯ãã[Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918)ããšããè«æã§Chao JiaãYinfei YangãYe XiaãYi-Ting ChenãZarana ParekhãHieu PhamãQuoc V. LeãYunhsuan SungãZhen LiãTom Duerigã«ãã£ãŠææ¡ãããŸãããALIGNã¯ãã«ãã¢ãŒãã«ãªèŠèŠèšèªã¢ãã«ã§ããããã¯ç»åãšããã¹ãã®é¡äŒŒåºŠãããŒãã·ã§ããç»ååé¡ã«äœ¿çšã§ããŸããALIGNã¯[EfficientNet](efficientnet)ãèŠèŠãšã³ã³ãŒããŒãšããŠã[BERT](bert)ãããã¹ããšã³ã³ãŒããŒãšããŠæèŒãããã¥ã¢ã«ãšã³ã³ãŒããŒæ§é ãç¹åŸŽãšãã察ç
§åŠç¿ã«ãã£ãŠèŠèŠãšããã¹ãã®è¡šçŸãæŽåãããããšãåŠã³ãŸãããããŸã§ã®ç ç©¶ãšã¯ç°ãªããALIGNã¯å·šå€§ã§ãã€ãžãŒãªããŒã¿ã»ãããæŽ»çšããã³ãŒãã¹ã®ã¹ã±ãŒã«ãå©çšããŠåçŽãªæ¹æ³ãªããæå
端ã®è¡šçŸãéæã§ããããšã瀺ããŠããŸãã
è«æã®èŠæšã¯ä»¥äžã®éãã§ãïŒ
*äºååŠç¿ããã衚çŸã¯ãå€ãã®èªç¶èšèªåŠçïŒNLPïŒããã³ç¥èŠã¿ã¹ã¯ã«ãšã£ãŠéèŠã«ãªã£ãŠããŸããNLPã«ããã衚çŸåŠç¿ã¯ã人éã®ã¢ãããŒã·ã§ã³ã®ãªãçã®ããã¹ãã§ã®åŠç¿ãžãšç§»è¡ããŠããŸãããèŠèŠããã³èŠèŠèšèªã®è¡šçŸã¯äŸç¶ãšããŠç²Ÿå·§ãªåŠç¿ããŒã¿ã»ããã«å€§ããäŸåããŠãããããã¯é«äŸ¡ã§ãã£ããå°éç¥èãå¿
èŠãšãããããŸããèŠèŠã¢ããªã±ãŒã·ã§ã³ã®å ŽåãImageNetãOpenImagesã®ãããªæç€ºçãªã¯ã©ã¹ã©ãã«ãæã€ããŒã¿ã»ããã䜿çšããŠåŠç¿ãããããšãã»ãšãã©ã§ããèŠèŠèšèªã®å ŽåãConceptual CaptionsãMSCOCOãCLIPãªã©ã®äººæ°ã®ããããŒã¿ã»ããã¯ãã¹ãŠãããããç¡èŠã§ããªãããŒã¿åéïŒããã³ã¯ãªãŒãã³ã°ïŒããã»ã¹ãå«ã¿ãŸãããã®ã³ã¹ãã®ããããã¥ã¬ãŒã·ã§ã³ããã»ã¹ã¯ããŒã¿ã»ããã®ãµã€ãºãå¶éããèšç·Žãããã¢ãã«ã®ã¹ã±ãŒãªã³ã°ã劚ããŸããæ¬è«æã§ã¯ãConceptual CaptionsããŒã¿ã»ããã®é«äŸ¡ãªãã£ã«ã¿ãªã³ã°ãåŸåŠçã¹ããããªãã§åŸãããã10åãè¶
ããç»åalt-textãã¢ã®ãã€ãºã®å€ãããŒã¿ã»ãããæŽ»çšããŸããã·ã³ãã«ãªãã¥ã¢ã«ãšã³ã³ãŒããŒã¢ãŒããã¯ãã£ã¯ã察ç
§æå€±ã䜿çšããŠç»åãšããã¹ããã¢ã®èŠèŠçããã³èšèªç衚çŸãæŽåãããããšãåŠç¿ããŸããæã
ã¯ãã³ãŒãã¹ã®èŠæš¡ããã®ãã€ãºãè£ãããã®ãããªåçŽãªåŠç¿ã¹ããŒã ã§ãæå
端ã®è¡šçŸã«ã€ãªããããšã瀺ããŸããæã
ã®èŠèŠè¡šçŸã¯ãImageNetãVTABãªã©ã®åé¡ã¿ã¹ã¯ãžã®è»¢ç§»ã«ãããŠåŒ·åãªæ§èœãçºæ®ããŸããæŽåããèŠèŠçããã³èšèªç衚çŸã¯ããŒãã·ã§ããç»ååé¡ãå¯èœã«ãããŸããããæŽç·Žãããã¯ãã¹ã¢ãã³ã·ã§ã³ã¢ãã«ãšæ¯èŒããŠããFlickr30Kããã³MSCOCOç»åããã¹ãæ€çŽ¢ãã³ãããŒã¯ã«ãããŠæ°ããªæå
端ã®çµæãéæããŸãããŸãããããã®è¡šçŸã¯ãè€éãªããã¹ãããã³ããã¹ã+ç»åã®ã¯ãšãªãçšããã¯ãã¹ã¢ãŒãã«æ€çŽ¢ãå¯èœã«ããŸãã*
ãã®ã¢ãã«ã¯[Alara Dirik](https://huggingface.co/adirik)ã«ããæäŸãããŸããã
ãªãªãžãã«ã®ã³ãŒãã¯å
¬éãããŠãããããã®å®è£
ã¯å
è«æã«åºã¥ããKakao Brainã®å®è£
ãããŒã¹ã«ããŠããŸãã
## 䜿çšäŸ
ALIGNã¯EfficientNetã䜿çšããŠèŠèŠçç¹åŸŽããBERTã䜿çšããŠããã¹ãç¹åŸŽãååŸããŸããããã¹ããšèŠèŠã®äž¡æ¹ã®ç¹åŸŽã¯ãåäžã®æ¬¡å
ãæã€æœåšç©ºéã«å°åœ±ãããŸããå°åœ±ãããç»åãšããã¹ãç¹åŸŽéã®ãããç©ãé¡äŒŒåºŠã¹ã³ã¢ãšããŠäœ¿çšãããŸãã
```python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
```
## åèè³æ
ALIGNã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒã®Hugging Faceãšã³ãã¥ããã£ïŒðã§ç€ºãããŠããïŒã®åèè³æã®äžèЧã§ãã
- [ALIGNãšCOYO-700MããŒã¿ã»ãã](https://huggingface.co/blog/vit-align)ã«é¢ããããã°æçš¿ã
- ãŒãã·ã§ããç»ååé¡[ãã¢](https://huggingface.co/spaces/adirik/ALIGN-zero-shot-image-classification)ã
- `kakaobrain/align-base` ã¢ãã«ã®[ã¢ãã«ã«ãŒã](https://huggingface.co/kakaobrain/align-base)ã
ããã«åèè³æãæåºãããå Žåã¯ãæ°å
ŒããªãPull RequestãéããŠãã ãããç§ãã¡ã¯ãããã¬ãã¥ãŒããããŸãïŒåèè³æã¯ãæ¢åã®ãã®ãè€è£œããã®ã§ã¯ãªããäœãæ°ããããšã瀺ãããšãçæ³çã§ãã
## AlignConfig
[[autodoc]] AlignConfig
- from_text_vision_configs
## AlignTextConfig
[[autodoc]] AlignTextConfig
## AlignVisionConfig
[[autodoc]] AlignVisionConfig
## AlignProcessor
[[autodoc]] AlignProcessor
## AlignModel
[[autodoc]] AlignModel
- forward
- get_text_features
- get_image_features
## AlignTextModel
[[autodoc]] AlignTextModel
- forward
## AlignVisionModel
[[autodoc]] AlignVisionModel
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BEiT
## Overview
BEiT ã¢ãã«ã¯ã[BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) ã§ Hangbo BaoãLi DongãFuru Wei ã«ãã£ãŠææ¡ãããŸãããBERT ã«è§ŠçºãããBEiT ã¯ãããžã§ã³ ãã©ã³ã¹ãã©ãŒã㌠(ViT) ã®èªå·±æåž«ããäºåãã¬ãŒãã³ã°ããæåž«ããäºåãã¬ãŒãã³ã°ãããåªããããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ããæåã®è«æã§ãã([ãªãªãžãã«ã® ViT è«æ](https://arxiv.org/abs/2010.11929) ã§è¡ãããããã«) ç»åã®ã¯ã©ã¹ãäºæž¬ããããã«ã¢ãã«ãäºåãã¬ãŒãã³ã°ããã®ã§ã¯ãªããBEiT ã¢ãã«ã¯ããã¹ã¯ãããããããã€ããŠãOpenAI ã® [DALL-E ã¢ãã«](https://arxiv.org/abs/2102.12092) ã®ã³ãŒãããã¯ããããžã¥ã¢ã« ããŒã¯ã³ãäºæž¬ããããã«äºåãã¬ãŒãã³ã°ãããŸãã
*èªå·±æåž«ããèŠèŠè¡šçŸã¢ãã« BEiT (Bidirectional Encoder representation from Image Transformers) ãå°å
¥ããŸããèªç¶èšèªåŠçåéã§éçºããã BERT ã«ãªãããããžã§ã³ ãã©ã³ã¹ãã©ãŒããŒãäºåãã¬ãŒãã³ã°ããããã®ãã¹ã¯ç»åã¢ããªã³ã°ã¿ã¹ã¯ãææ¡ããŸããå
·äœçã«ã¯ãäºåãã¬ãŒãã³ã°ã§ã¯åç»åã« 2 ã€ã®ãã¥ãŒãã€ãŸãç»åããã (16x16 ãã¯ã»ã«ãªã©) ãšããžã¥ã¢ã« ããŒã¯ã³ (é¢æ£çãªããŒã¯ã³) ããããŸãããŸããå
ã®ç»åãããžã¥ã¢ã« ããŒã¯ã³ã«ããŒã¯ã³åããŸãã次ã«ãããã€ãã®ç»åããããã©ã³ãã ã«ãã¹ã¯ãããããããã¯ããŒã³ã® Transformer ã«äžããŸããäºåãã¬ãŒãã³ã°ã®ç®çã¯ãç Žæããç»åããããã«åºã¥ããŠå
ã®ããžã¥ã¢ã« ããŒã¯ã³ã埩å
ããããšã§ããBEiT ã®äºåãã¬ãŒãã³ã°åŸã¯ãäºåãã¬ãŒãã³ã°ããããšã³ã³ãŒããŒã«ã¿ã¹ã¯ ã¬ã€ã€ãŒã远å ããããšã§ãäžæµã¿ã¹ã¯ã®ã¢ãã« ãã©ã¡ãŒã¿ãŒãçŽæ¥åŸ®èª¿æŽããŸããç»ååé¡ãšã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã«é¢ããå®éšçµæã¯ãç§ãã¡ã®ã¢ãã«ã以åã®äºåãã¬ãŒãã³ã°æ¹æ³ãšæ¯ã¹ãŠç«¶äºåã®ããçµæãéæããããšã瀺ããŠããŸããããšãã°ãåºæ¬ãµã€ãºã® BEiT ã¯ ImageNet-1K ã§ 83.2% ã®ããã 1 粟床ãéæããåãèšå®ã§ãŒãããåŠç¿ãã DeiT (81.8%) ã倧å¹
ã«äžåããŸãããŸãã倧åã® BEiT 㯠ImageNet-1K ã®ã¿ã䜿çšã㊠86.3% ãéæããImageNet-22K ã§ã®æåž«ããäºåãã¬ãŒãã³ã°ã䜿çšãã ViT-L (85.2%) ãäžåããŸãã*
## Usage tips
- BEiT ã¢ãã«ã¯éåžžã®ããžã§ã³ ãã©ã³ã¹ãã©ãŒããŒã§ãããæåž«ããã§ã¯ãªãèªå·±æåž«ããã®æ¹æ³ã§äºåãã¬ãŒãã³ã°ãããŠããŸããImageNet-1K ããã³ CIFAR-100 ã§åŸ®èª¿æŽãããšã[ãªãªãžãã« ã¢ãã« (ViT)](vit) ãš [ããŒã¿å¹çã®é«ãã€ã¡ãŒãž ãã©ã³ã¹ãã©ãŒã㌠(DeiT)](deit) ã®äž¡æ¹ãäžåãããã©ãŒãã³ã¹ãçºæ®ããŸããæšè«ãšã«ã¹ã¿ã ããŒã¿ã§ã®åŸ®èª¿æŽã«é¢ãããã¢ ããŒãããã¯ã¯ [ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) ã§ç¢ºèªã§ããŸã ([`ViTFeatureExtractor`] ã [`BeitImageProcessor`] ã«ã[`ViTForImageClassification`] ã [`BeitForImageClassification`] ã«çœ®ãæããã ãã§æžã¿ãŸã)ã
- DALL-E ã®ç»åããŒã¯ãã€ã¶ãŒãš BEiT ãçµã¿åãããŠãã¹ã¯ä»ãç»åã¢ããªã³ã°ãå®è¡ããæ¹æ³ã瀺ã㢠ããŒãããã¯ãå©çšå¯èœã§ãã[ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BEiT) ã§èŠã€ããããšãã§ããŸãã
- BEiT ã¢ãã«ã¯åç»åãåããµã€ãº (è§£å床) ã§ããããšãæåŸ
ããŠããããã[`BeitImageProcessor`] ã䜿çšããŠã¢ãã«çšã«ç»åã®ãµã€ãºå€æŽ (ãŸãã¯åã¹ã±ãŒãªã³ã°) ãšæ£èŠåãè¡ãããšãã§ããŸãã
- äºåãã¬ãŒãã³ã°ãŸãã¯åŸ®èª¿æŽäžã«äœ¿çšãããããã解å床ãšç»å解å床ã¯ãåãã§ãã¯ãã€ã³ãã®ååã«åæ ãããŠããŸããããšãã°ã`microsoft/beit-base-patch16-224` ã¯ãããã解å床 16x16ã埮調æŽè§£å床 224x224 ã®ããŒã¹ ãµã€ãºã®ã¢ãŒããã¯ãã£ãæããŸãããã¹ãŠã®ãã§ãã¯ãã€ã³ã㯠[ãã¡ã](https://huggingface.co/models?search=microsoft/beit) ã§èŠã€ããããšãã§ããŸãã
- å©çšå¯èœãªãã§ãã¯ãã€ã³ãã¯ã(1) [ImageNet-22k](http://www.image-net.org/) (1,400 äžæã®ç»åãš 22,000 ã¯ã©ã¹ã®ã³ã¬ã¯ã·ã§ã³) ã§äºåãã¬ãŒãã³ã°ã®ã¿ãè¡ã£ããã®ã(2) ImageNet-22k ã§ããã«åŸ®èª¿æŽãããã®ã(3) [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (ILSVRC 2012 ãšãåŒã°ãã130 äžæã®ç»åãš 1,000 ã¯ã©ã¹ã®ã³ã¬ã¯ã·ã§ã³) ã§ããã«åŸ®èª¿æŽãããã®ã®ãããã§ãã
- BEiT ã¯ãT5 ã¢ãã«ã«è§Šçºãããçžå¯Ÿäœçœ®åã蟌ã¿ã䜿çšããŸããäºåãã¬ãŒãã³ã°äžãèè
ãã¯è€æ°ã®èªå·±æ³šæå±€éã§çžå¯Ÿäœçœ®ãã€ã¢ã¹ãå
±æããŸããã埮調æŽäžã¯ãåã¬ã€ã€ãŒã®çžå¯Ÿäœçœ®ãã€ã¢ã¹ããäºåãã¬ãŒãã³ã°åŸã«åŸãããå
±æçžå¯Ÿäœçœ®ãã€ã¢ã¹ã§åæåãããŸããã¢ãã«ãæåããäºåãã¬ãŒãã³ã°ãããå Žåãäœçœ®åã蟌ã¿ã远å ããã«ã¯ã[`BeitConfig`] ã® `use_relative_position_bias` ãŸã㯠`use_shared_relative_position_bias` 屿§ã `True` ã«èšå®ããå¿
èŠãããç¹ã«æ³šæããŠãã ããã
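äžèšã®ãã³ãã«ããããã«ãããã解å床ãšç»å解å床ããããã·ãŒã±ã³ã¹é· (ãããæ°) ã決ãŸããŸãã以äžã¯ãã®é¢ä¿ã確èªããããã®ç°¡åãªèšç®äŸã§ã (`num_patches` ã¯èª¬æçšã«ããã§å®çŸ©ãã仮ã®é¢æ°ã§ã)ã

```python
def num_patches(image_size: int = 224, patch_size: int = 16) -> int:
    # ç»åã (patch_size x patch_size) ã®éè€ããªããããã«åå²ãããšãã®ãããæ°
    return (image_size // patch_size) ** 2

# microsoft/beit-base-patch16-224 ã®å Žå: 224 / 16 = 14 ãªã®ã§ 14 * 14 = 196 ããã
print(num_patches(224, 16))  # 196
```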
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/beit_architecture.jpg"
alt="drawing" width="600"/>
<small> BEiT ã®äºåãã¬ãŒãã³ã°ã <a href="https://arxiv.org/abs/2106.08254">å
ã®è«æããæç²ã</a> </small>
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸããããã®ã¢ãã«ã® JAX/FLAX ããŒãžã§ã³ã¯ [kamalkraj](https://huggingface.co/kamalkraj) ã«ãã£ãŠæçš¿ãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/microsoft/unilm/tree/master/beit) ã«ãããŸãã
## Resources
BEiT ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã§ãã
<PipelineTag pipeline="image-classification"/>
- [`BeitForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ã§ãµããŒããããŠããŸãã
- åç
§: [ç»ååé¡ã¿ã¹ã¯ ã¬ã€ã](../tasks/image_classification)
**ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³**
- [ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ ã¿ã¹ã¯ ã¬ã€ã](../tasks/semantic_segmentation)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## BEiT specific outputs
[[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling
[[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling
## BeitConfig
[[autodoc]] BeitConfig
## BeitFeatureExtractor
[[autodoc]] BeitFeatureExtractor
- __call__
- post_process_semantic_segmentation
## BeitImageProcessor
[[autodoc]] BeitImageProcessor
- preprocess
- post_process_semantic_segmentation
## BeitModel
[[autodoc]] BeitModel
- forward
## BeitForMaskedImageModeling
[[autodoc]] BeitForMaskedImageModeling
- forward
## BeitForImageClassification
[[autodoc]] BeitForImageClassification
- forward
## BeitForSemanticSegmentation
[[autodoc]] BeitForSemanticSegmentation
- forward
## FlaxBeitModel
[[autodoc]] FlaxBeitModel
- __call__
## FlaxBeitForMaskedImageModeling
[[autodoc]] FlaxBeitForMaskedImageModeling
- __call__
## FlaxBeitForImageClassification
[[autodoc]] FlaxBeitForImageClassification
- __call__
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# AltCLIP
## æŠèŠ
AltCLIPã¢ãã«ã¯ãã[AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2)ããšããè«æã§Zhongzhi ChenãGuang LiuãBo-Wen ZhangãFulong YeãQinghong YangãLedell Wuã«ãã£ãŠææ¡ãããŸãããAltCLIPïŒCLIPã®èšèªãšã³ã³ãŒããŒã®ä»£æ¿ïŒã¯ãæ§ã
ãªç»å-ããã¹ããã¢ããã³ããã¹ã-ããã¹ããã¢ã§ãã¬ãŒãã³ã°ããããã¥ãŒã©ã«ãããã¯ãŒã¯ã§ããCLIPã®ããã¹ããšã³ã³ãŒããŒãäºååŠç¿æžã¿ã®å€èšèªããã¹ããšã³ã³ãŒããŒXLM-Rã«çœ®ãæããããšã§ãã»ãŒå
šãŠã®ã¿ã¹ã¯ã§CLIPã«éåžžã«è¿ãæ§èœãåŸããããªãªãžãã«ã®CLIPã®èœåãå€èšèªçè§£ãªã©ã«æ¡åŒµããŸããã
è«æã®èŠæšã¯ä»¥äžã®éãã§ãïŒ
*ãã®ç ç©¶ã§ã¯ã匷åãªãã€ãªã³ã¬ã«ãã«ãã¢ãŒãã«è¡šçŸã¢ãã«ãèšç·Žããããã®æŠå¿µçã«åçŽã§å¹æçãªæ¹æ³ãææ¡ããŸããOpenAIã«ãã£ãŠãªãªãŒã¹ããããã«ãã¢ãŒãã«è¡šçŸã¢ãã«CLIPããéå§ãããã®ããã¹ããšã³ã³ãŒããäºååŠç¿æžã¿ã®å€èšèªããã¹ããšã³ã³ãŒãXLM-Rã«äº€æããæåž«åŠç¿ãšå¯Ÿç
§åŠç¿ãããªã2段éã®ãã¬ãŒãã³ã°ã¹ããŒããçšããŠèšèªãšç»åã®è¡šçŸãæŽåãããŸãããå¹
åºãã¿ã¹ã¯ã®è©äŸ¡ãéããŠãæã
ã®æ¹æ³ãæ€èšŒããŸããImageNet-CNãFlicker30k-CNãCOCO-CNãå«ãå€ãã®ã¿ã¹ã¯ã§æ°ããªæå
ç«¯ã®æ§èœãéæããŸãããããã«ãã»ãŒãã¹ãŠã®ã¿ã¹ã¯ã§CLIPã«éåžžã«è¿ãæ§èœãåŸãŠãããããã¯CLIPã®ããã¹ããšã³ã³ãŒãã倿Žããã ãã§ãå€èšèªçè§£ãªã©ã®æ¡åŒµãå®çŸã§ããããšã瀺åããŠããŸãã*
ãã®ã¢ãã«ã¯[jongjyh](https://huggingface.co/jongjyh)ã«ããæäŸãããŸããã
## 䜿çšäžã®ãã³ããšäœ¿çšäŸ
AltCLIPã®äœ¿ç𿹿³ã¯CLIPã«éåžžã«äŒŒãŠããŸããCLIPãšã®éãã¯ããã¹ããšã³ã³ãŒããŒã«ãããŸããç§ãã¡ã¯ã«ãžã¥ã¢ã«ã¢ãã³ã·ã§ã³ã§ã¯ãªãåæ¹åã¢ãã³ã·ã§ã³ã䜿çšããXLM-Rã®[CLS]ããŒã¯ã³ãããã¹ãåã蟌ã¿ã衚ããã®ãšããŠåãããšã«çæããŠãã ããã
AltCLIPã¯ãã«ãã¢ãŒãã«ãªèŠèŠèšèªã¢ãã«ã§ããããã¯ç»åãšããã¹ãã®é¡äŒŒåºŠãããŒãã·ã§ããç»ååé¡ã«äœ¿çšã§ããŸããAltCLIPã¯ViTã®ãããªTransformerã䜿çšããŠèŠèŠçç¹åŸŽããåæ¹åèšèªã¢ãã«ã䜿çšããŠããã¹ãç¹åŸŽãååŸããŸããããã¹ããšèŠèŠã®äž¡æ¹ã®ç¹åŸŽã¯ãåäžã®æ¬¡å
ãæã€æœåšç©ºéã«å°åœ±ãããŸããå°åœ±ãããç»åãšããã¹ãç¹åŸŽéã®ãããç©ãé¡äŒŒåºŠã¹ã³ã¢ãšããŠäœ¿çšãããŸãã
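ãã®ãããç©ã«ããé¡äŒŒåºŠã¹ã³ã¢ãšsoftmaxã«ãã確çåã®èãæ¹ã¯ã以äžã®ãããªçŽç²ãªPythonã®ã¹ã±ããã§ç¢ºèªã§ããŸãïŒç¹åŸŽãã¯ãã«ã¯èª¬æçšã®ãããŒãªå€ã§ã`dot`ã`softmax`ãšãã£ã颿°åãããã§å®çŸ©ãã仮ã®ãã®ã§ãïŒã

```python
import math

def dot(u, v):
    # 2ã€ã®ç¹åŸŽãã¯ãã«ã®ãããç©ïŒé¡äŒŒåºŠã¹ã³ã¢ïŒ
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    # ã¹ã³ã¢ã確çååžã«å€æãã
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# åãæœåšç©ºéã«å°åœ±æžã¿ãšä»®å®ãããããŒã®ç¹åŸŽ
image_feature = [0.1, 0.9, 0.2]
text_features = [[0.1, 0.8, 0.3],   # "a photo of a cat"
                 [0.9, 0.1, 0.0]]   # "a photo of a dog"

logits = [dot(image_feature, t) for t in text_features]
probs = softmax(logits)  # logits_per_image.softmax(dim=1) ã«çžåœããæäœ
```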
Transformerãšã³ã³ãŒããŒã«ç»åãäžããã«ã¯ãåç»åãåºå®ãµã€ãºã®éè€ããªããããã®ç³»åã«åå²ããããããç·åœ¢ã«åã蟌ã¿ãŸããç»åå
šäœã衚çŸããããã®[CLS]ããŒã¯ã³ã远å ãããŸããèè
ãã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ã远å ããçµæãšããŠåŸããããã¯ãã«ã®ç³»åãæšæºçãªTransformerãšã³ã³ãŒããŒã«äŸçµŠããŸãã[`CLIPImageProcessor`]ã䜿çšããŠãã¢ãã«ã®ããã«ç»åã®ãµã€ãºå€æŽïŒãŸãã¯æ¡å€§çž®å°ïŒãšæ£èŠåãè¡ãããšãã§ããŸãã
[`AltCLIPProcessor`]ã¯ãããã¹ãã®ãšã³ã³ãŒããšç»åã®ååŠçãäž¡æ¹è¡ãããã«ã[`CLIPImageProcessor`]ãš[`XLMRobertaTokenizer`]ãåäžã®ã€ã³ã¹ã¿ã³ã¹ã«ã©ããããŸãã以äžã®äŸã¯ã[`AltCLIPProcessor`]ãš[`AltCLIPModel`]ã䜿çšããŠç»å-ããã¹ãé¡äŒŒã¹ã³ã¢ãååŸããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AltCLIPModel, AltCLIPProcessor
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
<Tip>
ãã®ã¢ãã«ã¯`CLIPModel`ãããŒã¹ã«ããŠããããªãªãžãã«ã®[CLIP](clip)ãšåãããã«äœ¿çšããŠãã ããã
</Tip>
## AltCLIPConfig
[[autodoc]] AltCLIPConfig
- from_text_vision_configs
## AltCLIPTextConfig
[[autodoc]] AltCLIPTextConfig
## AltCLIPVisionConfig
[[autodoc]] AltCLIPVisionConfig
## AltCLIPProcessor
[[autodoc]] AltCLIPProcessor
## AltCLIPModel
[[autodoc]] AltCLIPModel
- forward
- get_text_features
- get_image_features
## AltCLIPTextModel
[[autodoc]] AltCLIPTextModel
- forward
## AltCLIPVisionModel
[[autodoc]] AltCLIPVisionModel
- forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# BROS
## Overview
BROS ã¢ãã«ã¯ãTeakgyu HonãDonghyun KimãMingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park ã«ãã£ãŠ [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539) ã§ææ¡ãããŸããã
BROS 㯠*BERT Relying On Spatiality* ã®ç¥ã§ããããã¯ãäžé£ã®ããŒã¯ã³ãšãã®å¢çããã¯ã¹ãå
¥åãšããŠåãåããäžé£ã®é ãç¶æ
ãåºåãããšã³ã³ãŒããŒå°çšã® Transformer ã¢ãã«ã§ããBROS ã¯ã絶察çãªç©ºéæ
å ±ã䜿çšãã代ããã«ãçžå¯Ÿçãªç©ºéæ
å ±ããšã³ã³ãŒãããŸãã
BERT ã§äœ¿çšãããããŒã¯ã³ãã¹ã¯èšèªã¢ããªã³ã°ç®æš (TMLM) ãšãæ°ãããšãªã¢ãã¹ã¯èšèªã¢ããªã³ã°ç®æš (AMLM) ã® 2 ã€ã®ç®æšã§äºåãã¬ãŒãã³ã°ãããŠããŸããTMLM ã§ã¯ãããŒã¯ã³ãã©ã³ãã ã«ãã¹ã¯ãããã¢ãã«ã¯ç©ºéæ
å ±ãšä»ã®ãã¹ã¯ãããŠããªãããŒã¯ã³ã䜿çšããŠãã¹ã¯ãããããŒã¯ã³ãäºæž¬ããŸããAMLM 㯠TMLM ã® 2D ããŒãžã§ã³ã§ãããã©ã³ãã ã«éžãã ããŒã¯ã³åäœã§ã¯ãªãããã¹ã ããã㯠(é å) åäœã§ãã¹ã¯ããTMLM ãšåãæ
å ±ã䜿ã£ãŠäºæž¬ããŸãã
`BrosForTokenClassification`ã«ã¯ãBrosModel ã®äžã«åçŽãªç·åœ¢å±€ããããŸããåããŒã¯ã³ã®ã©ãã«ãäºæž¬ããŸãã
`BrosSpadeEEForTokenClassification`ã«ã¯ãBrosModel ã®äžã«`initial_token_classifier`ãš`subsequent_token_classifier`ããããŸãã`initial_token_classifier` ã¯åãšã³ãã£ãã£ã®æåã®ããŒã¯ã³ãäºæž¬ããããã«äœ¿çšããã`subsequent_token_classifier` ã¯ãšã³ãã£ãã£å
ã®æ¬¡ã®ããŒã¯ã³ãäºæž¬ããããã«äœ¿çšãããŸãã`BrosSpadeELForTokenClassification`ã«ã¯ BrosModel ã®äžã«`entity_linker`ããããŸãã`entity_linker` 㯠2 ã€ã®ãšã³ãã£ãã£éã®é¢ä¿ãäºæž¬ããããã«äœ¿çšãããŸãã
`BrosForTokenClassification`ãš`BrosSpadeEEForTokenClassification`ã¯åºæ¬çã«åããžã§ããå®è¡ããŸãããã ãã`BrosForTokenClassification`ã¯å
¥åããŒã¯ã³ãå®å
šã«ã·ãªã¢ã«åãããŠããããšãåæãšããŠããŸã (ããŒã¯ã³ã¯ 2D 空éã«ååšãããããããã¯éåžžã«å°é£ãªäœæ¥ã§ã)ãäžæ¹ã`BrosSpadeEEForTokenClassification`㯠1 ã€ã®ããŒã¯ã³ããæ¬¡ã®æ¥ç¶ããŒã¯ã³ãäºæž¬ãããããã·ãªã¢ã«åãšã©ãŒã®åŠçãããæè»ã«è¡ãããšãã§ããŸãã
`BrosSpadeELForTokenClassification` ã¯ãšã³ãã£ãã£ ãªã³ã¯ (Entity Linking) ã¿ã¹ã¯ãå®è¡ããŸãã2 ã€ã®ãšã³ãã£ãã£ãäœããã®é¢ä¿ãå
±æããå Žåã(ãããšã³ãã£ãã£ã®) 1 ã€ã®ããŒã¯ã³ãã (å¥ã®ãšã³ãã£ãã£ã®) å¥ã®ããŒã¯ã³ãžã®é¢ä¿ãäºæž¬ããŸãã
BROS ã¯ãæç€ºçãªèŠèŠæ©èœã«äŸåããã«ãFUNSDãSROIEãCORDãSciTSR ãªã©ã® Key Information Extraction (KIE) ãã³ãããŒã¯ã§åç以äžã®çµæãéæããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ææžç»åããã®éèŠæ
å ±æœåº (KIE) ã«ã¯ã2 次å
(2D) 空éã«ãããããã¹ãã®æèçããã³ç©ºéçæå³è«ãçè§£ããå¿
èŠããããŸããæè¿ã®ç ç©¶ã®å€ãã¯ãææžç»åã®èŠèŠçç¹åŸŽãšããã¹ãããã³ãã®ã¬ã€ã¢ãŠããçµã¿åãããããšã«éç¹ã眮ããäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ãéçºããããšã§ãã®èª²é¡ã解決ããããšããŠããŸããäžæ¹ããã®ããŒããŒã§ã¯ãããã¹ããšã¬ã€ã¢ãŠãã®å¹æçãªçµã¿åãããšããåºæ¬ã«ç«ã¡è¿ã£ãŠãã®åé¡ã«åãçµã¿ãŸããå
·äœçã«ã¯ãBROS (BERT Relying On Spatiality) ãšããååã®äºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ãææ¡ããŸãããã®èšèªã¢ãã«ã¯ã2D 空éå
ã®ããã¹ãã®çžå¯Ÿäœçœ®ããšã³ã³ãŒããããšãªã¢ ãã¹ãã³ã°æŠç¥ã䜿çšããŠã©ãã«ã®ãªãããã¥ã¡ã³ãããåŠç¿ããŸãã2D 空éå
ã®ããã¹ããçè§£ããããã®ãã®æé©åããããã¬ãŒãã³ã° ã¹ããŒã ã«ãããBROS ã¯ãèŠèŠçãªç¹åŸŽã«äŸåããããšãªãã4 ã€ã® KIE ãã³ãããŒã¯ (FUNSDãSROIE*ãCORDãããã³ SciTSR) ã§ä»¥åã®æ¹æ³ãšæ¯èŒããŠåç以äžã®ããã©ãŒãã³ã¹ã瀺ããŸããããŸãããã®è«æã§ã¯ãKIE ã¿ã¹ã¯ã«ããã 2 ã€ã®çŸå®äžçã®èª²é¡ ((1) ééã£ãããã¹ãé åºã«ãããšã©ãŒã®æå°åãããã³ (2) å°æ°ã®äžæµäŸããã®å¹ççãªåŠç¿) ãæããã«ãã以åã®æ¹æ³ã«å¯Ÿãã BROS ã®åªäœæ§ãå®èšŒããŸãã*
ãã®ã¢ãã«ã¯ [jinho8345](https://huggingface.co/jinho8345) ã«ãã£ãŠå¯çš¿ãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/clovaai/bros) ã«ãããŸãã
## Usage tips and examples
- [`~transformers.BrosModel.forward`] ã«ã¯ã`input_ids` ãš `bbox` (ããŠã³ãã£ã³ã° ããã¯ã¹) ãå¿
èŠã§ããåå¢çããã¯ã¹ã¯ã(x0, y0, x1, y1) 圢åŒ (å·Šäžé
ãå³äžé
) ã§ããå¿
èŠããããŸããå¢çããã¯ã¹ã®ååŸã¯å€éš OCR ã·ã¹ãã ã«äŸåããŸãããxã座æšã¯ããã¥ã¡ã³ãç»åã®å¹
ã§æ£èŠåãããyã座æšã¯ããã¥ã¡ã³ãç»åã®é«ãã§æ£èŠåããå¿
èŠããããŸãã
```python
def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
    # here, bboxes are numpy array

    # Normalize bbox -> 0 ~ 1
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
```
- [`~transformers.BrosForTokenClassification.forward`ã`~transformers.BrosSpadeEEForTokenClassification.forward`ã`~transformers.BrosSpadeELForTokenClassification.forward`] ã§ã¯ãæå€±èšç®ã« `input_ids` ãš `bbox` ã ãã§ãªã `box_first_token_mask` ãå¿
èŠã§ããããã¯ãåããã¯ã¹ã®å
é 以å€ã®ããŒã¯ã³ãæå€±ããé€å€ããããã®ãã¹ã¯ã§ãããã®ãã¹ã¯ã¯ãåèªãã `input_ids` ãäœæãããšãã«å¢çããã¯ã¹ã®éå§ããŒã¯ã³ ã€ã³ããã¯ã¹ãä¿åããããšã§ååŸã§ããŸããæ¬¡ã®ã³ãŒãã§`box_first_token_mask`ãäœæã§ããŸãã
```python
import itertools
from typing import List

import numpy as np


def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512):
box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_)
# encode(tokenize) each word from words (List[str])
input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words]
# get the length of each box
tokens_length_list: List[int] = [len(l) for l in input_ids_list]
box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list)))
box_start_token_indices = box_end_token_indices - np.array(tokens_length_list)
# filter out the indices that are out of max_seq_length
box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1]
if len(box_start_token_indices) > len(box_end_token_indices):
box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)]
# set box_start_token_indices to True
box_first_token_mask[box_start_token_indices] = True
return box_first_token_mask
```
## Resources
- ã㢠ã¹ã¯ãªãã㯠[ãã¡ã](https://github.com/clovaai/bros) ã«ãããŸãã
## BrosConfig
[[autodoc]] BrosConfig
## BrosProcessor
[[autodoc]] BrosProcessor
- __call__
## BrosModel
[[autodoc]] BrosModel
- forward
## BrosForTokenClassification
[[autodoc]] BrosForTokenClassification
- forward
## BrosSpadeEEForTokenClassification
[[autodoc]] BrosSpadeEEForTokenClassification
- forward
## BrosSpadeELForTokenClassification
[[autodoc]] BrosSpadeELForTokenClassification
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Blenderbot
**å
責äºé
:** äœãå¥åŠãªãã®ãèŠã€ããå Žåã¯ã[Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) ã§å ±åããŠãã ããã
## Overview
Blender ãã£ãããã ã¢ãã«ã¯ã2020 幎 4 æ 30 æ¥ã« [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) 㧠Stephen RollerãEmily DinanãNaman GoyalãDa JuãMary WilliamsonãYinghan LiuãJing XuãMyle OttãKurt ShusterãEric M. SmithãY-Lan BoureauãJason Weston ã«ãã£ãŠææ¡ãããŸããã
è«æã®èŠæšã¯æ¬¡ã®ãšããã§ãã
*ãªãŒãã³ãã¡ã€ã³ã®ãã£ãããããã®æ§ç¯ã¯ãæ©æ¢°åŠç¿ç ç©¶ã«ãšã£ãŠé£ããåéã§ããããŸã§ã®ç ç©¶ã§ã¯ããã¥ãŒã©ã« ã¢ãã«ããã©ã¡ãŒã¿ãŒæ°ãšãã¬ãŒãã³ã° ããŒã¿ã®ãµã€ãºã«é¢ããŠã¹ã±ãŒãªã³ã°ãããšçµæãåäžããããšã瀺ãããŠããŸãããæ¬ç ç©¶ã§ã¯ã髿§èœã®ãã£ãããããã«ã¯ä»ã®èŠçŽ ãéèŠã§ããããšã瀺ããŸããè¯ãäŒè©±ã«ã¯ãäŒè©±ã®å°éå®¶ãã·ãŒã ã¬ã¹ã«èåãããå€ãã®ã¹ãã«ãå¿
èŠã§ããé
åçãªè©±ã®ãã€ã³ããæäŸãã€ã€çžæã®è©±ãèãããšãäžè²«ããæ
床ãç¶æããªããç¥èãå
±æãåæ§ãé©åã«è¡šçŸããããšãªã©ã§ããé©åãªãã¬ãŒãã³ã° ããŒã¿ãšçææŠç¥ãéžã¹ã°ã倧èŠæš¡ã¢ãã«ããããã®ã¹ãã«ãåŠç¿ã§ããããšã瀺ããŸãã90Mã2.7Bã9.4B ãã©ã¡ãŒã¿ãŒã®ã¢ãã«ã§ãããã®ã¬ã·ãã®ããªã¢ã³ããæ§ç¯ããã¢ãã«ãšã³ãŒããå
¬éããŸãã人éã«ããè©äŸ¡ã§ã¯ãé
åãšäººéæ§ã®æž¬å®ãšãã芳ç¹ããã®ãã«ãã¿ãŒã³å¯Ÿè©±ã«ãããŠãæã
ã®æè¯ã®ã¢ãã«ãæ¢åã®ã¢ãããŒããäžåãããšã瀺ãããŸãããæåŸã«ãåŒç€Ÿã¢ãã«ã®æ
éäºäŸã®åæãéããŠããã®ç ç©¶ã®éçã«ã€ããŠè°è«ããŸãã*
ãã³ã:

- Blenderbot ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
ãã®ã¢ãã«ã¯ [sshleifer](https://huggingface.co/sshleifer) ã«ãã£ãŠæäŸãããŸãããèè
ãã®ã³ãŒã㯠[ãã¡ã](https://github.com/facebookresearch/ParlAI) ã«ãããŸãã
## Implementation Notes
- Blenderbot ã¯ãæšæºã® [seq2seq ã¢ãã« ãã©ã³ã¹ãã©ãŒããŒ](https://arxiv.org/pdf/1706.03762.pdf) ããŒã¹ã®ã¢ãŒããã¯ãã£ã䜿çšããŸãã
- å©çšå¯èœãªãã§ãã¯ãã€ã³ãã¯ã[ã¢ãã« ãã](https://huggingface.co/models?search=blenderbot) ã§èŠã€ããããšãã§ããŸãã
- ãã㯠*ããã©ã«ã* ã® Blenderbot ã¢ãã« ã¯ã©ã¹ã§ãããã ãã`facebook/blenderbot_small_90M` ãªã©ã®å°ããªãã§ãã¯ãã€ã³ãã¯ã¢ãŒããã¯ãã£ãç°ãªãããã[BlenderbotSmall](blenderbot-small) ãšäžç·ã«äœ¿çšããå¿
èŠããããŸãã
## Usage
ã¢ãã«ã®äœ¿çšäŸã次ã«ç€ºããŸãã
```python
>>> from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
>>> mname = "facebook/blenderbot-400M-distill"
>>> model = BlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = BlenderbotTokenizer.from_pretrained(mname)
>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([UTTERANCE], return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(reply_ids))
["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"]
```
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
## BlenderbotConfig
[[autodoc]] BlenderbotConfig
## BlenderbotTokenizer
[[autodoc]] BlenderbotTokenizer
- build_inputs_with_special_tokens
## BlenderbotTokenizerFast
[[autodoc]] BlenderbotTokenizerFast
- build_inputs_with_special_tokens
## BlenderbotModel
*forward* ããã³ *generate* ã®åŒæ°ã«ã€ããŠã¯ã`transformers.BartModel` ãåç
§ããŠãã ããã
[[autodoc]] BlenderbotModel
- forward
## BlenderbotForConditionalGeneration
*forward* ãš *generate* ã®åŒæ°ã«ã€ããŠã¯ã[`~transformers.BartForConditionalGeneration`] ãåç
§ããŠãã ããã
[[autodoc]] BlenderbotForConditionalGeneration
- forward
## BlenderbotForCausalLM
[[autodoc]] BlenderbotForCausalLM
- forward
## TFBlenderbotModel
[[autodoc]] TFBlenderbotModel
- call
## TFBlenderbotForConditionalGeneration
[[autodoc]] TFBlenderbotForConditionalGeneration
- call
## FlaxBlenderbotModel
[[autodoc]] FlaxBlenderbotModel
- __call__
- encode
- decode
## FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotForConditionalGeneration
- __call__
- encode
- decode
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CANINE
## Overview
CANINE ã¢ãã«ã¯ã[CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation](https://arxiv.org/abs/2103.06874) 㧠Jonathan H. ClarkãDan GarretteãIulia TurcãJohn Wieting ã«ãã£ãŠææ¡ãããŸãããæç€ºçãªããŒã¯ã³åã¹ããã (ãã€ããã¢ ãšã³ã³ãŒãã£ã³ã° (BPE)ãWordPieceãSentencePiece ãªã©) ã䜿çšããã« Transformer ããã¬ãŒãã³ã°ããæåã®è«æã® 1 ã€ã§ãã代ããã«ããã®ã¢ãã«ã¯ Unicode æåã¬ãã«ã§çŽæ¥ãã¬ãŒãã³ã°ãããŸããæåã¬ãã«ã§ã®ãã¬ãŒãã³ã°ã§ã¯å¿
ç¶çã«ã·ãŒã±ã³ã¹ãé·ããªããŸãããCANINE ã¯ããã£ãŒã㪠Transformer ãšã³ã³ãŒããŒãé©çšããåã«ããŠã³ãµã³ããªã³ã°æŠç¥ãå®è¡ããããšã§ããããå¹ççã«è§£æ±ºããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ãã€ãã©ã€ã³ NLP ã·ã¹ãã ã¯ããšã³ãããŒãšã³ãã®ãã¥ãŒã©ã« ã¢ããªã³ã°ã«å€§éšåãåã£ãŠä»£ããããŸããããäžè¬çã«äœ¿çšãããŠããã»ãŒãã¹ãŠã®ã¢ãã«ã¯ãäŸç¶ãšããŠæç€ºçãªããŒã¯ã³åæé ãå¿
èŠãšããŸããããŒã¿ç±æ¥ã®ãµãã¯ãŒãèªåœã«åºã¥ãæè¿ã®ããŒã¯ã³åã¢ãããŒãã¯ãæåã§äœæãããããŒã¯ãã€ã¶ãŒãããè匱ã§ã¯ãããŸããããããããæè¡ããã¹ãŠã®èšèªã«çããé©ããŠããããã§ã¯ãªããåºå®èªåœã®äœ¿çšã¯ã¢ãã«ã®é©å¿èœåãå¶éããå¯èœæ§ããããŸãããã®è«æã§ã¯ãæç€ºçãªããŒã¯ã³åãèªåœã䜿çšããã«æåã·ãŒã±ã³ã¹ãçŽæ¥æäœãããã¥ãŒã©ã« ãšã³ã³ãŒã㌠CANINE ãšãç¡¬çŽãªããŒã¯ã³å¢çã®ä»£ããã«ãœããèªå°ãã€ã¢ã¹ãšããŠãµãã¯ãŒãã䜿çšã§ããäºåãã¬ãŒãã³ã°æŠç¥ãå°å
¥ããŸãããããã®çްããå
¥åã广çãã€å¹ççã«äœ¿çšããããã«ãCANINE ã¯ãå
¥åã·ãŒã±ã³ã¹é·ãåæžããããŠã³ãµã³ããªã³ã°ãšãã³ã³ããã¹ãããšã³ã³ãŒããããã£ãŒã ãã©ã³ã¹ãã©ãŒã㌠ã¹ã¿ãã¯ãçµã¿åãããŸããCANINE ã¯ãã¢ãã« ãã©ã¡ãŒã¿ã 28% å°ãªãã«ãããããããå°é£ãªå€èšèªãã³ãããŒã¯ã§ãã TyDi QA ã«ãããŠãåçã® mBERT ã¢ãã«ã 2.8 F1 äžåããŸãã*
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/language/tree/master/language/canine) ã«ãããŸãã
## Usage tips
- CANINE ã¯å
éšã§å°ãªããšã 3 ã€ã® Transformer ãšã³ã³ãŒããŒã䜿çšããŸãã2 ã€ã®ãæµ
ãããšã³ã³ãŒã㌠(åäžã®ãšã³ã³ãŒã㌠ã¬ã€ã€ãŒã®ã¿ã§æ§æ) ãš 1 ã€ã®ããã£ãŒãããšã³ã³ãŒã㌠(éåžžã® BERT ãšã³ã³ãŒããŒ) ã§ãããŸãããæµ
ãããšã³ã³ãŒããŒãšããŒã«ã« ã¢ãã³ã·ã§ã³ã䜿çšããŠæåã®åã蟌ã¿ãã³ã³ããã¹ãåããŸããæ¬¡ã«ãããŠã³ãµã³ããªã³ã°ã®åŸã«ããã£ãŒãããšã³ã³ãŒããŒãé©çšãããŸããæåŸã«ãã¢ãããµã³ããªã³ã°ã®åŸããæµ
ãããšã³ã³ãŒããŒã䜿çšããŠæçµçãªæååã蟌ã¿ãäœæãããŸããã¢ãããµã³ããªã³ã°ãšããŠã³ãµã³ããªã³ã°ã®è©³çŽ°ã«ã€ããŠã¯è«æã«èšèŒãããŠããŸãã
- CANINE ã¯ãããã©ã«ã㧠2048 æåã®æå€§ã·ãŒã±ã³ã¹é·ã䜿çšããŸããã¢ãã«çšã®ããã¹ããæºåããã«ã¯ [`CanineTokenizer`] ã䜿çšã§ããŸãã
- åé¡ã¯ãç¹å¥ãª [CLS] ããŒã¯ã³ (äºåå®çŸ©ããã Unicode ã³ãŒã ãã€ã³ããæã¡ãŸã) ã®æçµçãªé ãç¶æ
ã®äžã«ç·åœ¢ã¬ã€ã€ãŒãé
眮ããããšã§è¡ãããšãã§ããŸãããã ããããŒã¯ã³åé¡ã¿ã¹ã¯ã®å Žåã¯ãããŠã³ãµã³ããªã³ã°ãããããŒã¯ã³ ã·ãŒã±ã³ã¹ããå
ã®æåã·ãŒã±ã³ã¹ã®é·ã (2048) ãšäžèŽããããã«å床ã¢ãããµã³ããªã³ã°ããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯è«æãåç
§ããŠãã ããã
ã¢ãã«ã®ãã§ãã¯ãã€ã³ã:
- [google/canine-c](https://huggingface.co/google/canine-c): èªå·±ååž°æåæå€±ã§äºåãã¬ãŒãã³ã°æžã¿ã12 ã¬ã€ã€ãŒãé ã次å
768ã12 ã¢ãã³ã·ã§ã³ ãããã121M ãã©ã¡ãŒã¿ãŒ (ãµã€ãº ~500 MB)ã
- [google/canine-s](https://huggingface.co/google/canine-s): ãµãã¯ãŒãæå€±ã§äºåãã¬ãŒãã³ã°æžã¿ã12 ã¬ã€ã€ãŒãé ã次å
768ã12 ã¢ãã³ã·ã§ã³ ãããã121M ãã©ã¡ãŒã¿ãŒ (ãµã€ãº ~500 MB)ã
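CANINE ã¯ããã£ãŒãããšã³ã³ãŒããŒã®åã«ã·ãŒã±ã³ã¹ãããŠã³ãµã³ããªã³ã°ããŸãã以äžã¯ãããŠã³ãµã³ããªã³ã°ç 4 ã仮å®ããå Žåã®ã·ãŒã±ã³ã¹é·ã®ç°¡åãªèšç®äŸã§ã (`downsampled_length` ã¯èª¬æçšã«ããã§å®çŸ©ãã仮ã®é¢æ°ã§ããå®éã®çã¯èšå®ã«äŸåããŸã)ã

```python
def downsampled_length(seq_len: int, rate: int = 4) -> int:
    # rate æåããšã« 1 ã€ã®äœçœ®ã«çž®çŽãã (åãäžãé€ç®)
    return -(-seq_len // rate)

# æå€§ã·ãŒã±ã³ã¹é· 2048 æåã®å Žåããã£ãŒãããšã³ã³ãŒããŒãåŠçããäœçœ®æ°
print(downsampled_length(2048))  # 512
```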
## Usage example
CANINE ã¯çã®æåã§åäœããããã**ããŒã¯ãã€ã¶ãŒãªã**ã§äœ¿çšã§ããŸãã
```python
>>> from transformers import CanineModel
>>> import torch
>>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
>>> text = "hello world"
>>> # use Python's built-in ord() function to turn each character into its unicode code point id
>>> input_ids = torch.tensor([[ord(char) for char in text]])
>>> outputs = model(input_ids) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
ãã ããããæšè«ããã³ãã¬ãŒãã³ã°ã®å Žåã¯ãããŒã¯ãã€ã¶ãŒã䜿çšããŠãã¹ãŠã®ã·ãŒã±ã³ã¹ãåãé·ãã«ããã£ã³ã°/åãè©°ããããšããå§ãããŸã:
```python
>>> from transformers import CanineTokenizer, CanineModel
>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
>>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
>>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
>>> outputs = model(**encoding) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
## Resources
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## CanineConfig
[[autodoc]] CanineConfig
## CanineTokenizer
[[autodoc]] CanineTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
## CANINE specific outputs
[[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling
## CanineModel
[[autodoc]] CanineModel
- forward
## CanineForSequenceClassification
[[autodoc]] CanineForSequenceClassification
- forward
## CanineForMultipleChoice
[[autodoc]] CanineForMultipleChoice
- forward
## CanineForTokenClassification
[[autodoc]] CanineForTokenClassification
- forward
## CanineForQuestionAnswering
[[autodoc]] CanineForQuestionAnswering
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ByT5
## Overview
ByT5 ã¢ãã«ã¯ã[ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir
Kale, Adam Roberts, Colin Raffel.
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æãåºã䜿çšãããŠããäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã¯ãåèªãŸãã¯ãµãã¯ãŒãåäœã«å¯Ÿå¿ããããŒã¯ã³ã®ã·ãŒã±ã³ã¹ã§åäœããŸããããã¹ããããŒã¯ã³ã®ã·ãŒã±ã³ã¹ãšããŠãšã³ã³ãŒãããã«ã¯ããŒã¯ãã€ã¶ãŒãå¿
èŠã§ãããããã¯éåžžãã¢ãã«ãšã¯ç¬ç«ã®æåç©ãšããŠäœæãããŸãã代ããã«ãçã®ããã¹ã (ãã€ããŸãã¯æå) ãçŽæ¥æäœããããŒã¯ã³ããªãŒ ã¢ãã«ã«ã¯å€ãã®å©ç¹ããããŸãããã®ãŸãŸã§ããããèšèªã®ããã¹ããåŠçã§ãããã€ãºã«å¯ŸããŠããå
ç¢ã§ãããè€éã§ãšã©ãŒãçºçããããããã¹ãååŠçãã€ãã©ã€ã³ãåé€ããããšã§æè¡çè² åµãæå°éã«æããŸãããã€ããŸãã¯æåã®ã·ãŒã±ã³ã¹ã¯ããŒã¯ã³ã®ã·ãŒã±ã³ã¹ããé·ããªããããããŒã¯ã³ããªãŒ ã¢ãã«ã«é¢ããéå»ã®ç ç©¶ã§ã¯ãçã®ããã¹ããçŽæ¥æäœããã³ã¹ããååŽããããã«èšèšãããæ°ããã¢ãã« ã¢ãŒããã¯ãã£ãå°å
¥ãããããšããã°ãã°ãããŸããããã®è«æã§ã¯ãæšæºç㪠Transformer ã¢ãŒããã¯ãã£ããæå°éã®å€æŽã§ãã€ãã·ãŒã±ã³ã¹ã®åŠçã«äœ¿çšã§ããããšã瀺ããŸãããã©ã¡ãŒã¿æ°ããã¬ãŒãã³ã° FLOPãæšè«é床ã®èгç¹ãããã¬ãŒããªããæ³šææ·±ãç¹åŸŽä»ãããã€ãã¬ãã«ã®ã¢ãã«ãããŒã¯ã³ã¬ãã«ã®å¯Ÿå¿ã¢ãã«ãšç«¶åã§ããããšã瀺ããŸãããŸãããã€ãã¬ãã«ã®ã¢ãã«ã¯ãã€ãºã«å¯ŸããŠå€§å¹
ã«å
ç¢ã§ãããã¹ãã«ãšçºé³ã«ææãªã¿ã¹ã¯ã§ãåªããããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ããŸããç§ãã¡ã®è²¢ç®ã®äžç°ãšããŠãT5 ã¢ãŒããã¯ãã£ã«åºã¥ãäºåãã¬ãŒãã³ã°æžã¿ã®ãã€ãã¬ãã« Transformer ã¢ãã«ã®æ°ããã»ãããšãå®éšã§äœ¿çšãããã¹ãŠã®ã³ãŒããšããŒã¿ããªãªãŒã¹ããŸãã*
ãã®ã¢ãã«ã¯ã[patrickvonplaten](https://huggingface.co/patrickvonplaten) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/byt5) ã«ãããŸãã
<Tip>
ByT5 ã®ã¢ãŒããã¯ãã£ã¯ T5v1.1 ã¢ãã«ã«åºã¥ããŠããŸããAPI ãªãã¡ã¬ã³ã¹ã«ã€ããŠã¯ [T5v1.1 ã®ããã¥ã¡ã³ã ããŒãž](t5v1.1) ãåç
§ããŠãã ãããäž¡è
ã¯ãã¢ãã«ã®å
¥åãæºåããæ¹æ³ãç°ãªãã ãã§ãã以äžã®ã³ãŒãäŸãåç
§ããŠãã ããã
</Tip>
ByT5 ã¯æåž«ãªãã§äºåãã¬ãŒãã³ã°ãããŠãããããåäžã¿ã¹ã¯ã®åŸ®èª¿æŽäžã«ã¿ã¹ã¯ ãã¬ãã£ãã¯ã¹ã䜿çšããå©ç¹ã¯ãããŸããããã«ãã¿ã¹ã¯ã®åŸ®èª¿æŽãè¡ãå Žåã¯ããã¬ãã£ãã¯ã¹ã䜿çšããå¿
èŠããããŸãã
## Usage Examples
ByT5 ã¯çã® UTF-8 ãã€ãã§åäœãããããããŒã¯ãã€ã¶ãŒãªãã§äœ¿çšã§ããŸãã
```python
>>> from transformers import T5ForConditionalGeneration
>>> import torch
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> num_special_tokens = 3
>>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5.
>>> # => Need to shift utf-8 character encodings by 3 before passing ids to model.
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
>>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens
>>> loss = model(input_ids, labels=labels).loss
>>> loss.item()
2.66
```
ãã ããããæšè«ãšãã¬ãŒãã³ã°ã®å Žåã¯ãããŒã¯ãã€ã¶ãŒã䜿çšããããšããå§ãããŸãã
```python
>>> from transformers import T5ForConditionalGeneration, AutoTokenizer
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
>>> model_inputs = tokenizer(
... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt"
... )
>>> labels_dict = tokenizer(
... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt"
... )
>>> labels = labels_dict.input_ids
>>> loss = model(**model_inputs, labels=labels).loss
>>> loss.item()
17.9
```
[T5](t5) ãšåæ§ã«ãByT5 ã¯ã¹ãã³ ãã¹ã¯ã«ãããã€ãºé€å»ã¿ã¹ã¯ã§ãã¬ãŒãã³ã°ãããŸããããã ããã¢ãã«ã¯æå (ãã€ã) ã«çŽæ¥äœçšãããããã¹ã¯ã®æ±ãã¯å°ãè€éã«ãªããŸããããã§ã¯ `"The dog chases a ball in the park."` ãšããæã®ããã€ãã®æåãç Žæ (ãã¹ã¯) ããŠãByT5 ã«ãã®éšåãäºæž¬ãããŠã¿ãŸãããã
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
>>> input_ids_prompt = "The dog chases a ball in the park."
>>> input_ids = tokenizer(input_ids_prompt).input_ids
>>> # Note that we cannot add "{extra_id_...}" to the string directly
>>> # as the Byte tokenizer would incorrectly merge the tokens
>>> # For ByT5, we need to work directly on the character level
>>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead
>>> # uses final utf character ids.
>>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens.
>>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258.
>>> # => mask to "The dog [258]a ball [257]park."
>>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
>>> input_ids
tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]])
>>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`.
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist()
>>> output_ids
[0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49]
>>> # ^- Note how 258 descends to 257, 256, 255
>>> # Now we need to split on the sentinel tokens, let's write a short loop for this
>>> output_ids_list = []
>>> start_token = 0
>>> sentinel_token = 258
>>> while sentinel_token in output_ids:
... split_idx = output_ids.index(sentinel_token)
... output_ids_list.append(output_ids[start_token:split_idx])
... start_token = split_idx
... sentinel_token -= 1
>>> output_ids_list.append(output_ids[start_token:])
>>> output_string = tokenizer.batch_decode(output_ids_list)
>>> output_string
['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.']
```
## ByT5Tokenizer
[[autodoc]] ByT5Tokenizer
See [`ByT5Tokenizer`] for details.
hf_public_repos/transformers/docs/source/ja/model_doc/clap.md

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLAP
## Overview
The CLAP model was proposed in [Large Scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/pdf/2211.06687.pdf) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.

CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. Given an audio clip, it can be instructed to predict the most relevant text snippet, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected into a latent space of identical dimension. The dot product between the projected audio and text features is used as the similarity score.
The abstract from the paper is the following:

*Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in the text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.*
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/LAION-AI/Clap).
## ClapConfig
[[autodoc]] ClapConfig
- from_text_audio_configs
## ClapTextConfig
[[autodoc]] ClapTextConfig
## ClapAudioConfig
[[autodoc]] ClapAudioConfig
## ClapFeatureExtractor
[[autodoc]] ClapFeatureExtractor
## ClapProcessor
[[autodoc]] ClapProcessor
## ClapModel
[[autodoc]] ClapModel
- forward
- get_text_features
- get_audio_features
## ClapTextModel
[[autodoc]] ClapTextModel
- forward
## ClapTextModelWithProjection
[[autodoc]] ClapTextModelWithProjection
- forward
## ClapAudioModel
[[autodoc]] ClapAudioModel
- forward
## ClapAudioModelWithProjection
[[autodoc]] ClapAudioModelWithProjection
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/clvp.md

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLVP
## Overview
The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
The abstract from the paper is the following:

*In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic process and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an expressive, multi-voice text-to-speech system.*
This model was contributed by [Susnato Dhar](https://huggingface.co/susnato). The original code can be found [here](https://github.com/neonbjb/tortoise-tts).
## Usage tips
1. CLVP 㯠Tortoise TTS ã¢ãã«ã®äžå¯æ¬ ãªéšåã§ãã
2. CLVP ã䜿çšããŠãçæãããããŸããŸãªé³å£°åè£ãæäŸãããããã¹ããšæ¯èŒããããšãã§ããæè¯ã®é³å£°ããŒã¯ã³ãæ¡æ£ã¢ãã«ã«è»¢éãããŸãã
3. Tortoise ã®äœ¿çšã«ã¯ã[`ClvpModelForConditionalGeneration.generate()`] ã¡ãœããã®äœ¿çšã匷ããå§ãããŸãã
4. 16 kHz ãæåŸ
ããä»ã®ãªãŒãã£ãª ã¢ãã«ãšã¯å¯Ÿç
§çã«ãCLVP ã¢ãã«ã¯ãªãŒãã£ãªã 22.05 kHz ã§ãµã³ããªã³ã°ãããããšãæåŸ
ããŠããããšã«æ³šæããŠãã ããã
## Brief Explanation:
- The [`ClvpTokenizer`] tokenizes the text input, and the [`ClvpFeatureExtractor`] extracts the log mel-spectrogram from the desired audio.
- [`ClvpConditioningEncoder`] takes those text tokens and audio representations and converts them into embeddings conditioned on the text and audio.
- The [`ClvpForCausalLM`] uses those embeddings to generate multiple speech candidates.
- Each speech candidate is passed through the speech encoder ([`ClvpEncoder`]), which converts it into a vector representation, and the text encoder ([`ClvpEncoder`]) converts the text tokens into the same latent space.
- At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector.
- [`ClvpModelForConditionalGeneration.generate()`] compresses all of the logic described above into a single method.
Example:
```python
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library).
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]
>>> # Define processor and model.
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and model output.
>>> processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt")
>>> generated_output = model.generate(**processor_output)
```
## ClvpConfig
[[autodoc]] ClvpConfig
- from_sub_model_configs
## ClvpEncoderConfig
[[autodoc]] ClvpEncoderConfig
## ClvpDecoderConfig
[[autodoc]] ClvpDecoderConfig
## ClvpTokenizer
[[autodoc]] ClvpTokenizer
- save_vocabulary
## ClvpFeatureExtractor
[[autodoc]] ClvpFeatureExtractor
- __call__
## ClvpProcessor
[[autodoc]] ClvpProcessor
- __call__
- decode
- batch_decode
## ClvpModelForConditionalGeneration
[[autodoc]] ClvpModelForConditionalGeneration
- forward
- generate
- get_text_features
- get_speech_features
## ClvpForCausalLM
[[autodoc]] ClvpForCausalLM
## ClvpModel
[[autodoc]] ClvpModel
## ClvpEncoder
[[autodoc]] ClvpEncoder
## ClvpDecoder
[[autodoc]] ClvpDecoder
hf_public_repos/transformers/docs/source/ja/model_doc/camembert.md

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CamemBERT
## Overview
The CamemBERT model was proposed in [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la
Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019. It is a model
trained on 138GB of French text.
The abstract from the paper is the following:

*Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available
models have either been trained on English data or on the concatenation of data in multiple languages. This makes
practical use of such models -- in all languages except English -- very limited. Aiming to address this issue for French,
we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the
performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging,
dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art
for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and
downstream applications for French NLP.*
This model was contributed by [camembert](https://huggingface.co/camembert). The original code can be found [here](https://camembert-model.fr/).
<Tip>
This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well
as the information relative to the inputs and outputs.
</Tip>
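Because the interface mirrors RoBERTa, a quick way to exercise the model is the fill-mask pipeline. The sketch below assumes the `camembert-base` checkpoint; like RoBERTa, CamemBERT uses `<mask>` as its mask token.

```python
from transformers import pipeline

# Fill-mask with the base French checkpoint; RoBERTa-style <mask> token.
camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
for r in results:
    # Each candidate carries the predicted token string and its score.
    print(r["token_str"], round(r["score"], 3))
```

By default the pipeline returns the top 5 candidate tokens for the masked position.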
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## CamembertConfig
[[autodoc]] CamembertConfig
## CamembertTokenizer
[[autodoc]] CamembertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CamembertTokenizerFast
[[autodoc]] CamembertTokenizerFast
<frameworkcontent>
<pt>
## CamembertModel
[[autodoc]] CamembertModel
## CamembertForCausalLM
[[autodoc]] CamembertForCausalLM
## CamembertForMaskedLM
[[autodoc]] CamembertForMaskedLM
## CamembertForSequenceClassification
[[autodoc]] CamembertForSequenceClassification
## CamembertForMultipleChoice
[[autodoc]] CamembertForMultipleChoice
## CamembertForTokenClassification
[[autodoc]] CamembertForTokenClassification
## CamembertForQuestionAnswering
[[autodoc]] CamembertForQuestionAnswering
</pt>
<tf>
## TFCamembertModel
[[autodoc]] TFCamembertModel
## TFCamembertForCausalLM
[[autodoc]] TFCamembertForCausalLM
## TFCamembertForMaskedLM
[[autodoc]] TFCamembertForMaskedLM
## TFCamembertForSequenceClassification
[[autodoc]] TFCamembertForSequenceClassification
## TFCamembertForMultipleChoice
[[autodoc]] TFCamembertForMultipleChoice
## TFCamembertForTokenClassification
[[autodoc]] TFCamembertForTokenClassification
## TFCamembertForQuestionAnswering
[[autodoc]] TFCamembertForQuestionAnswering
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/model_doc/convnextv2.md

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvNeXt V2
## Overview
The ConvNeXt V2 model was proposed in [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of [ConvNeXT](convnext).
The abstract from the paper is the following:

*Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png"
alt="drawing" width="600"/>

<small> ConvNeXt V2 architecture. Taken from the <a href="https://arxiv.org/abs/2301.00808">original paper</a>.</small>
This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/ConvNeXt-V2).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.

<PipelineTag pipeline="image-classification"/>

- [`ConvNextV2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
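As a minimal sketch of the classification head in use (the `facebook/convnextv2-tiny-1k-224` checkpoint and the COCO test-image URL are assumptions for illustration), an ImageNet-1k prediction looks like:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 1000) for an ImageNet-1k head

# Map the highest-scoring class index back to its human-readable label.
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```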
## ConvNextV2Config
[[autodoc]] ConvNextV2Config
## ConvNextV2Model
[[autodoc]] ConvNextV2Model
- forward
## ConvNextV2ForImageClassification
[[autodoc]] ConvNextV2ForImageClassification
- forward
## TFConvNextV2Model
[[autodoc]] TFConvNextV2Model
- call
## TFConvNextV2ForImageClassification
[[autodoc]] TFConvNextV2ForImageClassification
- call
hf_public_repos/transformers/docs/source/ja/model_doc/bert-generation.md

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BertGeneration
## Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461)
by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the following:

*Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By
warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT,
GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
Text Summarization, Sentence Splitting, and Sentence Fusion.*
## Usage examples and tips
- The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for
  subsequent fine-tuning:
```python
>>> # leverage checkpoints for Bert2Bert model...
>>> # use BERT's cls token as BOS token and sep token as EOS token
>>> encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)
>>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
>>> decoder = BertGenerationDecoder.from_pretrained(
... "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
>>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
>>> # create tokenizer...
>>> tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
>>> input_ids = tokenizer(
... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
>>> # train...
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
```
- Pretrained [`EncoderDecoderModel`] are also directly available in the model hub, e.g.:
```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> input_ids = tokenizer(
... "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> outputs = sentence_fuser.generate(input_ids)
>>> print(tokenizer.decode(outputs[0]))
```
Tips:

- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with
  [`EncoderDecoder`].
- For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input.
  Therefore, no EOS token should be added to the end of the input.
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder).
## BertGenerationConfig
[[autodoc]] BertGenerationConfig
## BertGenerationTokenizer
[[autodoc]] BertGenerationTokenizer
- save_vocabulary
## BertGenerationEncoder
[[autodoc]] BertGenerationEncoder
- forward
## BertGenerationDecoder
[[autodoc]] BertGenerationDecoder
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/albert.md

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ALBERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=albert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-albert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/albert-base-v2">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview

The ALBERT model was proposed in [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:

- Splitting the embedding matrix into two smaller matrices.
- Using repeating layers split among groups.
The abstract from the paper is the following:

*Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.*
This model was contributed by [lysandre](https://huggingface.co/lysandre). This model's jax version was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT).
## Usage tips

- ALBERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- ALBERT uses repeating layers which results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
- Embedding size E is different from hidden size H, which is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has fewer parameters.
- Layers are split in groups that share parameters (to save memory). Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive), and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.
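The effect of the factorized embedding parameterization is easy to verify numerically. The sketch below shrinks the layer count to keep instantiation cheap (the V=30000, E=128, H=768 figures match the base configuration, used here as an assumption) and compares the factorized V×E + E×H embedding parameters against a naive V×H matrix:

```python
from transformers import AlbertConfig, AlbertModel

# Small model (2 repeating layers) with albert-base-v2's embedding dimensions.
config = AlbertConfig(
    vocab_size=30000, embedding_size=128, hidden_size=768,
    num_hidden_layers=2, num_attention_heads=12, intermediate_size=3072,
)
model = AlbertModel(config)

# Factorized: V x E token embeddings plus an E x H projection into the hidden size.
factorized = config.vocab_size * config.embedding_size + config.embedding_size * config.hidden_size
# What a BERT-style V x H embedding matrix would cost instead.
naive = config.vocab_size * config.hidden_size

print(factorized, naive)  # 3,938,304 vs 23,040,000 parameters
# The token embedding table really is V x E, not V x H:
assert model.embeddings.word_embeddings.weight.shape == (config.vocab_size, config.embedding_size)
```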
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## AlbertConfig
[[autodoc]] AlbertConfig
## AlbertTokenizer
[[autodoc]] AlbertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## AlbertTokenizerFast
[[autodoc]] AlbertTokenizerFast
## Albert specific outputs
[[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput
[[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
<frameworkcontent>
<pt>
## AlbertModel
[[autodoc]] AlbertModel
- forward
## AlbertForPreTraining
[[autodoc]] AlbertForPreTraining
- forward
## AlbertForMaskedLM
[[autodoc]] AlbertForMaskedLM
- forward
## AlbertForSequenceClassification
[[autodoc]] AlbertForSequenceClassification
- forward
## AlbertForMultipleChoice
[[autodoc]] AlbertForMultipleChoice
- forward
## AlbertForTokenClassification
[[autodoc]] AlbertForTokenClassification
- forward
## AlbertForQuestionAnswering
[[autodoc]] AlbertForQuestionAnswering
- forward
</pt>
<tf>
## TFAlbertModel
[[autodoc]] TFAlbertModel
- call
## TFAlbertForPreTraining
[[autodoc]] TFAlbertForPreTraining
- call
## TFAlbertForMaskedLM
[[autodoc]] TFAlbertForMaskedLM
- call
## TFAlbertForSequenceClassification
[[autodoc]] TFAlbertForSequenceClassification
- call
## TFAlbertForMultipleChoice
[[autodoc]] TFAlbertForMultipleChoice
- call
## TFAlbertForTokenClassification
[[autodoc]] TFAlbertForTokenClassification
- call
## TFAlbertForQuestionAnswering
[[autodoc]] TFAlbertForQuestionAnswering
- call
</tf>
<jax>
## FlaxAlbertModel
[[autodoc]] FlaxAlbertModel
- __call__
## FlaxAlbertForPreTraining
[[autodoc]] FlaxAlbertForPreTraining
- __call__
## FlaxAlbertForMaskedLM
[[autodoc]] FlaxAlbertForMaskedLM
- __call__
## FlaxAlbertForSequenceClassification
[[autodoc]] FlaxAlbertForSequenceClassification
- __call__
## FlaxAlbertForMultipleChoice
[[autodoc]] FlaxAlbertForMultipleChoice
- __call__
## FlaxAlbertForTokenClassification
[[autodoc]] FlaxAlbertForTokenClassification
- __call__
## FlaxAlbertForQuestionAnswering
[[autodoc]] FlaxAlbertForQuestionAnswering
- __call__
</jax>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/model_doc/autoformer.md

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Autoformer
## Overview

The Autoformer model was proposed in [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.

This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.

The abstract from the paper is the following:

*Extending the forecasting time is a critical demand for real applications, such as the early warning of extreme weather and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.*
This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif).
The original code can be found [here](https://github.com/thuml/Autoformer).
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Autoformer. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- Check out the Autoformer blog post on the HuggingFace blog: [Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)](https://huggingface.co/blog/autoformer)
## AutoformerConfig
[[autodoc]] AutoformerConfig
## AutoformerModel
[[autodoc]] AutoformerModel
- forward
## AutoformerForPrediction
[[autodoc]] AutoformerForPrediction
- forward
hf_public_repos/transformers/docs/source/ja/model_doc/clipseg.md

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIPSeg
## Overview
The CLIPSeg model was proposed in [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen [CLIP](clip) model for zero-shot and one-shot image segmentation.
The abstract from the paper is the following:

*Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clipseg_architecture.png"
alt="drawing" width="600"/>

<small> CLIPSeg overview. Taken from the <a href="https://arxiv.org/abs/2112.10003">original paper</a>. </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/timojl/clipseg).
## Usage tips
- [`CLIPSegForImageSegmentation`] adds a decoder on top of [`CLIPSegModel`]. The latter is identical to [`CLIPModel`].
- [`CLIPSegForImageSegmentation`] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text
  (provided to the model as `input_ids`) or an image (provided to the model as `conditional_pixel_values`). One can also provide custom
  conditional embeddings (provided to the model as `conditional_embeddings`).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="image-segmentation"/>

- A notebook that illustrates [zero-shot image segmentation with CLIPSeg](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb).
## CLIPSegConfig
[[autodoc]] CLIPSegConfig
- from_text_vision_configs
## CLIPSegTextConfig
[[autodoc]] CLIPSegTextConfig
## CLIPSegVisionConfig
[[autodoc]] CLIPSegVisionConfig
## CLIPSegProcessor
[[autodoc]] CLIPSegProcessor
## CLIPSegModel
[[autodoc]] CLIPSegModel
- forward
- get_text_features
- get_image_features
## CLIPSegTextModel
[[autodoc]] CLIPSegTextModel
- forward
## CLIPSegVisionModel
[[autodoc]] CLIPSegVisionModel
- forward
## CLIPSegForImageSegmentation
[[autodoc]] CLIPSegForImageSegmentation
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Conditional DETR
## Overview
æ¡ä»¶ä»ã DETR ã¢ãã«ã¯ã[Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) ã§ Depu MengãXiaokang ChenãZejia FanãGang ZengãHouqiang LiãYuhui YuanãLei SunãJingdong Wang ã«ãã£ãŠææ¡ãããŸãããæ¡ä»¶ä»ã DETR ã¯ãé«é DETR ãã¬ãŒãã³ã°ã®ããã®æ¡ä»¶ä»ãã¯ãã¹ã¢ãã³ã·ã§ã³ ã¡ã«ããºã ãæäŸããŸããæ¡ä»¶ä»ã DETR 㯠DETR ããã 6.7 åãã 10 åéãåæããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æè¿éçºããã DETR ã¢ãããŒãã¯ããã©ã³ã¹ãã©ãŒã㌠ãšã³ã³ãŒããŒããã³ãã³ãŒã㌠ã¢ãŒããã¯ãã£ãç©äœæ€åºã«é©çšããææãªããã©ãŒãã³ã¹ãå®çŸããŸãããã®è«æã§ã¯ããã¬ãŒãã³ã°ã®åæãé
ããšããéèŠãªåé¡ãæ±ããé«é DETR ãã¬ãŒãã³ã°ã®ããã®æ¡ä»¶ä»ãã¯ãã¹ã¢ãã³ã·ã§ã³ ã¡ã«ããºã ã玹ä»ããŸããç§ãã¡ã®ã¢ãããŒãã¯ãDETR ã«ãããã¯ãã¹ã¢ãã³ã·ã§ã³ã 4 ã€ã®åè¢ã®äœçœ®ç¹å®ãšããã¯ã¹ã®äºæž¬ã«ãããŠã³ã³ãã³ãã®åã蟌ã¿ã«å€§ããäŸåããŠãããããé«å質ã®ã³ã³ãã³ãã®åã蟌ã¿ã®å¿
èŠæ§ãé«ãŸãããã¬ãŒãã³ã°ã®é£æåºŠãé«ããªããšããç¹ã«åæ©ã¥ããããŠããŸããæ¡ä»¶ä»ã DETR ãšåŒã°ããç§ãã¡ã®ã¢ãããŒãã¯ããã³ãŒããŒã®ãã«ãããã ã¯ãã¹ã¢ãã³ã·ã§ã³ã®ããã«ããã³ãŒããŒã®åã蟌ã¿ããæ¡ä»¶ä»ãã®ç©ºéã¯ãšãªãåŠç¿ããŸããå©ç¹ã¯ãæ¡ä»¶ä»ã空éã¯ãšãªãéããŠãåã¯ãã¹ã¢ãã³ã·ã§ã³ ãããããåå¥ã®é å (ããšãã°ã1 ã€ã®ãªããžã§ã¯ãã®ç«¯ãããªããžã§ã¯ã ããã¯ã¹å
ã®é åãªã©) ãå«ããã³ãã«æ³šç®ã§ããããšã§ããããã«ããããªããžã§ã¯ãåé¡ãšããã¯ã¹ååž°ã®ããã®åå¥ã®é åãããŒã«ã©ã€ãºããããã®ç©ºéç¯å²ãçãŸããã³ã³ãã³ãã®åã蟌ã¿ãžã®äŸåãç·©åããããã¬ãŒãã³ã°ã容æã«ãªããŸããå®éšçµæã¯ãæ¡ä»¶ä»ã DETR ãããã¯ããŒã³ R50 ããã³ R101 ã§ 6.7 åéããåæãããã匷åãªããã¯ããŒã³ DC5-R50 ããã³ DC5-R101 ã§ 10 åéããåæããããšã瀺ããŠããŸããã³ãŒã㯠https://github.com/Atten4Vis/ConditionalDETR ã§å
¥æã§ããŸãã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg"
alt="æç»" width="600"/>
<small> æ¡ä»¶ä»ã DETR ã¯ãå
ã® DETR ã«æ¯ã¹ãŠã¯ããã«éãåæã瀺ããŸãã <a href="https://arxiv.org/abs/2108.06152">å
ã®è«æ</a>ããåŒçšã</small>
ãã®ã¢ãã«ã¯ [DepuMeng](https://huggingface.co/DepuMeng) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/Atten4Vis/ConditionalDETR) ã«ãããŸãã
## Resources
- [ãªããžã§ã¯ãæ€åºã¿ã¹ã¯ã¬ã€ã](../tasks/object_detection)
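åèãŸã§ã«ã以äžã¯ [`ConditionalDetrForObjectDetection`] ã«ããç©äœæ€åºã®ç°¡åãªã¹ã±ããã§ã (ãã§ãã¯ãã€ã³ãå `microsoft/conditional-detr-resnet-50` ã®äœ¿çšã¯äžäŸã§ã)ã

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw logits/boxes to COCO-style detections above a score threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```

ãããå€ (`threshold`) ã¯çšéã«å¿ããŠèª¿æŽããŠãã ããã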
## ConditionalDetrConfig
[[autodoc]] ConditionalDetrConfig
## ConditionalDetrImageProcessor
[[autodoc]] ConditionalDetrImageProcessor
- preprocess
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
## ConditionalDetrFeatureExtractor
[[autodoc]] ConditionalDetrFeatureExtractor
- __call__
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
## ConditionalDetrModel
[[autodoc]] ConditionalDetrModel
- forward
## ConditionalDetrForObjectDetection
[[autodoc]] ConditionalDetrForObjectDetection
- forward
## ConditionalDetrForSegmentation
[[autodoc]] ConditionalDetrForSegmentation
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BigBirdPegasus
## Overview
BigBird ã¢ãã«ã¯ãManzil ZaheerãGuru GuruganeshãKumar Avinava DubeyãJoshua AinslieãChris AlbertiãSantiago OntañónãPhilip PhamãAnirudh RavulaãQifan WangãLi Yang ãã«ãã£ãŠ [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) ã§ææ¡ãããŸãããBigBird ã¯ãBERT ãªã©ã® Transformer ããŒã¹ã®ã¢ãã«ãããã«é·ãã·ãŒã±ã³ã¹ãžæ¡åŒµãããã¹ããŒã¹ ã¢ãã³ã·ã§ã³ããŒã¹ã®ã¢ãã«ã§ããBigBird ã¯ãã¹ããŒã¹ ã¢ãã³ã·ã§ã³ã«å ããŠãå
¥åã·ãŒã±ã³ã¹ã«ã°ããŒãã« ã¢ãã³ã·ã§ã³ãšã©ã³ãã  ã¢ãã³ã·ã§ã³ãé©çšããŸããçè«çã«ã¯ãã¹ããŒã¹ãã°ããŒãã«ãã©ã³ãã ã®åã¢ãã³ã·ã§ã³ãçµã¿åããããšå®å
šãªã¢ãã³ã·ã§ã³ã«è¿ã¥ãããšã瀺ãããŠãããé·ãã·ãŒã±ã³ã¹ã«å¯Ÿããèšç®å¹çã¯å€§å¹
ã«åäžããŸããé·ãã³ã³ããã¹ããåŠçã§ããçµæãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã§ãBERT ãŸã㯠RoBERTa ãšæ¯èŒããŠããã©ãŒãã³ã¹ã倧å¹
ã«åäžãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BERT ãªã©ã®ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ã¢ãã«ã¯ãNLP ã§æãæåããæ·±å±€åŠç¿ã¢ãã«ã® 1 ã€ã§ããæ®å¿µãªããããã®äžæ žçãªå¶éã® 1 ã€ã¯ãå®å
šãªæ³šæã¡ã«ããºã ã«ãããã·ãŒã±ã³ã¹é·ã«å¯Ÿããäºæ¬¡äŸåæ§ (äž»ã«ã¡ã¢ãªã«é¢ãã) ã§ãããããè§£æ±ºããããã«ãBigBird ã¯ããã®äºæ¬¡äŸåé¢ä¿ãç·åœ¢ã«åæžãããŸã°ããªæ³šæã¡ã«ããºã ãææ¡ããŸããBigBird ãã·ãŒã±ã³ã¹é¢æ°ã®æ±çšè¿äŒŒåšã§ããããã¥ãŒãªã³ã°å®å
šã§ãããããäºæ¬¡ã®å®å
šæ³šæã¢ãã«ã®ãããã®ç¹æ§ãä¿æãããŸããéäžãçè«åæã«ãããO(1) åã®ã°ããŒãã« ããŒã¯ã³ (CLS ãªã©) ãã¹ããŒã¹æ³šæã¡ã«ããºã ã®äžéšãšããŠã·ãŒã±ã³ã¹å
šäœã«æ³šæãåããããšã®å©ç¹ã®äžéšãæããã«ãªããŸããææ¡ãããã¹ããŒã¹ ã¢ãã³ã·ã§ã³ã¯ãåæ§ã®ããŒããŠã§ã¢ã§ä»¥åã«å¯èœã ã£ããã®ã® 8 åã®é·ãã®ã·ãŒã±ã³ã¹ãåŠçã§ããŸããé·ãã³ã³ããã¹ããåŠçã§ããæ©èœã®çµæãšããŠãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžãããŸããããã«ãã²ãã ããŒã¿ãžã®æ°ããã¢ããªã±ãŒã·ã§ã³ãææ¡ããŸãã*
## Usage tips
- BigBird ã®æ³šæãã©ã®ããã«æ©èœãããã«ã€ããŠã®è©³çްãªèª¬æã«ã€ããŠã¯ã[ãã®ããã°æçš¿](https://huggingface.co/blog/big-bird) ãåç
§ããŠãã ããã
- BigBird ã«ã¯ã**original_full** ãš **block_sparse** ã® 2 ã€ã®å®è£
ãä»å±ããŠããŸããã·ãŒã±ã³ã¹é·ã 1024 æªæºã®å Žåã¯ã**block_sparse** ã䜿çšããŠãå©ç¹ããªãããã**original_full** ã䜿çšããããšããå§ãããŸãã
- ã³ãŒãã¯çŸåšã3 ãããã¯ãš 2 ã°ããŒãã« ãããã¯ã®ãŠã£ã³ã㊠ãµã€ãºã䜿çšããŠããŸãã
- ã·ãŒã±ã³ã¹ã®é·ãã¯ããã㯠ãµã€ãºã§å²ãåããå¿
èŠããããŸãã
- çŸåšã®å®è£
ã§ã¯ **ITC** ã®ã¿ããµããŒããããŠããŸãã
- çŸåšã®å®è£
ã§ã¯ **num_random_blocks = 0** ã¯ãµããŒããããŠããŸããã
- BigBirdPegasus 㯠[PegasusTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pegasus/tokenization_pegasus.py) ã䜿çšããŸãã
- BigBird ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/bigbird) ã«ãããŸãã
## ããã¥ã¡ã³ã ãªãœãŒã¹
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
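åèãŸã§ã«ã以äžã¯é·æèŠçŽã®æå°ã¹ã±ããã§ã (ãã§ãã¯ãã€ã³ã `google/bigbird-pegasus-large-arxiv` ã®äœ¿çšã¯äžäŸã§ã)ãã·ãŒã±ã³ã¹é·ã 1024 以äžã«ãªããš **block_sparse** ã¢ãã³ã·ã§ã³ã䜿çšãããŸãã

```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

# a long (repetitive, illustrative) input document
text = (
    "Transformers-based models are unable to process long sequences due to their "
    "self-attention operation, which scales quadratically with the sequence length. "
) * 16
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

# generate a summary with beam search
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0]
print(summary)
```

`max_length` ã `num_beams` ãªã©ã®çæãã©ã¡ãŒã¿ã¯ã¿ã¹ã¯ã«å¿ããŠèª¿æŽããŠãã ããã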
## BigBirdPegasusConfig
[[autodoc]] BigBirdPegasusConfig
- all
## BigBirdPegasusModel
[[autodoc]] BigBirdPegasusModel
- forward
## BigBirdPegasusForConditionalGeneration
[[autodoc]] BigBirdPegasusForConditionalGeneration
- forward
## BigBirdPegasusForSequenceClassification
[[autodoc]] BigBirdPegasusForSequenceClassification
- forward
## BigBirdPegasusForQuestionAnswering
[[autodoc]] BigBirdPegasusForQuestionAnswering
- forward
## BigBirdPegasusForCausalLM
[[autodoc]] BigBirdPegasusForCausalLM
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERTweet
## Overview
BERTweet ã¢ãã«ã¯ãDat Quoc NguyenãThanh VuãAnh Tuan Nguyen ã«ãã£ãŠ [BERTweet: A pre-trained language model for English Tweets](https://www.aclweb.org/anthology/2020.emnlp-demos.2.pdf) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç§ãã¡ã¯ãè±èªãã€ãŒãçšã«åããŠå
¬éãããå€§èŠæš¡ãªäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã§ãã BERTweet ã玹ä»ããŸããç§ãã¡ã® BERTweet ã¯ãBERT-base ãšåãã¢ãŒããã¯ã㣠(Devlin et al., 2019) ãæã¡ãRoBERTa ã®äºåãã¬ãŒãã³ã°æé  (Liu et al., 2019) ã䜿çšããŠãã¬ãŒãã³ã°ãããŸããå®éšã§ã¯ãBERTweet ã匷åãªããŒã¹ã©ã€ã³ã§ãã RoBERTa-base ããã³ XLM-R-base (Conneau et al., 2020) ãäžåãããã©ãŒãã³ã¹ã瀺ããåè©ã¿ã°ä»ããåºæè¡šçŸèªèãããã¹ãåé¡ãšãã 3 ã€ã®ãã€ãŒã NLP ã¿ã¹ã¯ã«ãããŠã以åã®æå
端ã¢ãã«ãããåªããããã©ãŒãã³ã¹çµæãåŸãããŸããã*
## Usage example
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
>>> # For transformers v4.x+:
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
>>> # For transformers v3.x:
>>> # tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
>>> # INPUT TWEET IS ALREADY NORMALIZED!
>>> line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
>>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = bertweet(input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
<Tip>
ãã®å®è£
ã¯ãããŒã¯ã³åæ¹æ³ãé€ã㊠BERT ãšåãã§ããAPI ãªãã¡ã¬ã³ã¹æ
å ±ã®è©³çްã«ã€ããŠã¯ã[BERT ããã¥ã¡ã³ã](bert) ãåç
§ããŠãã ããã
</Tip>
ãã®ã¢ãã«ã¯ [dqnguyen](https://huggingface.co/dqnguyen) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/VinAIResearch/BERTweet) ã«ãããŸãã
## BertweetTokenizer
[[autodoc]] BertweetTokenizer
<!--Copyright 2023 The Intel Labs Team Authors, The Microsoft Research Team Authors and HuggingFace Inc. team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BridgeTower
## Overview
BridgeTower ã¢ãã«ã¯ãXiao XuãChenfei WuãShachar RosenmanãVasudev LalãWanxiang CheãNan Duan ã«ãã£ãŠ [BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning](https://arxiv.org/abs/2206.08657) ã§ææ¡ãããŸããããã®ã¢ãã«ã®ç®æšã¯ã
åãŠãã¢ãŒãã« ãšã³ã³ãŒããšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®éã®ããªããžã«ãããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®åå±€ã§ã®å
æ¬çãã€è©³çްãªå¯Ÿè©±ãå¯èœã«ãªãã远å ã®ããã©ãŒãã³ã¹ãšèšç®ã³ã¹ããã»ãšãã©ç¡èŠã§ããçšåºŠã§ãããŸããŸãªäžæµã¿ã¹ã¯ã§åªããããã©ãŒãã³ã¹ãå®çŸããŸãã
ãã®è«æã¯ [AAAI'23](https://aaai.org/Conferences/AAAI-23/) äŒè°ã«æ¡æãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*TWO-TOWER ã¢ãŒããã¯ãã£ãåããããžã§ã³èšèª (VL) ã¢ãã«ã¯ãè¿å¹Žã®èŠèŠèšèªè¡šçŸåŠç¿ã®äž»æµãšãªã£ãŠããŸããçŸåšã® VL ã¢ãã«ã¯ã軜éã®ãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã䜿çšããŠããã£ãŒã ã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã§äž¡æ¹ã®ã¢ããªãã£ãåæã«æœåºãäœçœ®åãããèåããããšãåŠç¿ããããäºåã«ãã¬ãŒãã³ã°ããããã£ãŒã ãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã®æçµå±€ã®ãŠãã¢ãŒãã«è¡šçŸãäžéšã®ã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã«äŸçµŠããŸããã©ã¡ãã®ã¢ãããŒãããèŠèŠèšèªè¡šçŸã®åŠç¿ãå¶éããã¢ãã«ã®ããã©ãŒãã³ã¹ãå¶éããå¯èœæ§ããããŸãããã®è«æã§ã¯ããŠãã¢ãŒãã« ãšã³ã³ãŒãã®æäžäœå±€ãšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®åå±€ã®éã®æ¥ç¶ãæ§ç¯ããè€æ°ã®ããªããžå±€ãå°å
¥ãã BRIDGETOWER ãææ¡ããŸããããã«ãããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒå
ã§ãäºåãã¬ãŒãã³ã°æžã¿ãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã®ããŸããŸãªã»ãã³ãã£ã㯠ã¬ãã«ã®èŠèŠè¡šçŸãšããã¹ã衚çŸã®éã§ã广çãªããã ã¢ããã®ã¯ãã¹ã¢ãŒãã«èª¿æŽãšèåãå¯èœã«ãªããŸããBRIDGETOWER 㯠4M æã®ç»åã®ã¿ã§äºåãã¬ãŒãã³ã°ãããŠãããããŸããŸãªäžæµã®èŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸããç¹ã«ãVQAv2 ãã¹ãæšæºã»ããã§ã¯ãBRIDGETOWER 㯠78.73% ã®ç²ŸåºŠãéæãã远å ã®ãã©ã¡ãŒã¿ãšèšç®ã³ã¹ããã»ãŒç¡èŠã§ããçšåºŠã§ãåãäºåãã¬ãŒãã³ã° ããŒã¿ã䜿çšãã以åã®æå
端ã¢ãã« METER ã 1.09% äžåããŸãããã¢ãã«ãããã«ã¹ã±ãŒãªã³ã°ãããšãBRIDGETOWER 㯠81.15% ã®ç²ŸåºŠãéæããæ¡éãã«å€§ããªããŒã¿ã»ããã§äºåãã¬ãŒãã³ã°ãããã¢ãã«ãäžåããŸããã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/bridgetower_architecture%20.jpg"
alt="drawing" width="600"/>
<small> ããªããžã¿ã¯ãŒ ã¢ãŒããã¯ãã£ã <a href="https://arxiv.org/abs/2206.08657">å
ã®è«æããæç²ã</a> </small>
ãã®ã¢ãã«ã¯ã[Anahita Bhiwandiwalla](https://huggingface.co/anahita-b)ã[Tiep Le](https://huggingface.co/Tile)ã[Shaoyen Tseng](https://huggingface.co/shaoyent) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/microsoft/BridgeTower) ã«ãããŸãã
## Usage tips and examples
BridgeTower ã¯ãããžã¥ã¢ã« ãšã³ã³ãŒããŒãããã¹ã ãšã³ã³ãŒããŒãããã³è€æ°ã®è»œéããªããž ã¬ã€ã€ãŒãåããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã§æ§æãããŸãã
ãã®ã¢ãããŒãã®ç®æšã¯ãåãŠãã¢ãŒãã« ãšã³ã³ãŒããŒãšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã®éã«ããªããžãæ§ç¯ããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã®åå±€ã§å
æ¬çãã€è©³çްãªå¯Ÿè©±ãå¯èœã«ããããšã§ãã
ååãšããŠãææ¡ãããã¢ãŒããã¯ãã£ã§ã¯ãä»»æã®ããžã¥ã¢ã«ãããã¹ãããŸãã¯ã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããé©çšã§ããŸãã
[`BridgeTowerProcessor`] ã¯ã[`RobertaTokenizer`] ãš [`BridgeTowerImageProcessor`] ãåäžã®ã€ã³ã¹ã¿ã³ã¹ã«ã©ããããããšã§ãããã¹ãã®ãšã³ã³ãŒããšç»åã®æºåã®äž¡æ¹ãå®çŸããŸãã
次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForContrastiveLearning`] ã䜿çšããŠå¯Ÿç
§åŠç¿ãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
>>> import requests
>>> from PIL import Image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
>>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
>>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs
```
次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForImageAndTextRetrieval`] ã䜿çšããŠç»åããã¹ãã®ååŸãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
>>> import requests
>>> from PIL import Image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs.logits[0, 1].item()
```
次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForMaskedLM`] ã䜿çšããŠãã¹ã¯ãããèšèªã¢ããªã³ã°ãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000360943.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>> text = "a <mask> looking out of the window"
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> # prepare inputs
>>> encoding = processor(image, text, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**encoding)
>>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
>>> print(results)
.a cat looking out of the window.
```
ãããïŒ
- BridgeTower ã®ãã®å®è£
ã§ã¯ã[`RobertaTokenizer`] ã䜿çšããŠããã¹ãåã蟌ã¿ãçæããOpenAI ã® CLIP/ViT ã¢ãã«ã䜿çšããŠèŠèŠçåã蟌ã¿ãèšç®ããŸãã
- äºåãã¬ãŒãã³ã°ããã [bridgeTower-base](https://huggingface.co/BridgeTower/bridgetower-base) ããã³ [bridgetower ãã¹ã¯ãããèšèªã¢ããªã³ã°ãšç»åããã¹ã ãããã³ã°](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) ã®ãã§ãã¯ãã€ã³ãããªãªãŒã¹ãããŸããã
- ç»åæ€çŽ¢ããã³ãã®ä»ã®äžæµã¿ã¹ã¯ã«ããã BridgeTower ã®ããã©ãŒãã³ã¹ã«ã€ããŠã¯ã[衚 5](https://arxiv.org/pdf/2206.08657.pdf) ãåç
§ããŠãã ããã
- ãã®ã¢ãã«ã® PyTorch ããŒãžã§ã³ã¯ãtorch 1.10 以éã§ã®ã¿äœ¿çšã§ããŸãã
## BridgeTowerConfig
[[autodoc]] BridgeTowerConfig
## BridgeTowerTextConfig
[[autodoc]] BridgeTowerTextConfig
## BridgeTowerVisionConfig
[[autodoc]] BridgeTowerVisionConfig
## BridgeTowerImageProcessor
[[autodoc]] BridgeTowerImageProcessor
- preprocess
## BridgeTowerProcessor
[[autodoc]] BridgeTowerProcessor
- __call__
## BridgeTowerModel
[[autodoc]] BridgeTowerModel
- forward
## BridgeTowerForContrastiveLearning
[[autodoc]] BridgeTowerForContrastiveLearning
- forward
## BridgeTowerForMaskedLM
[[autodoc]] BridgeTowerForMaskedLM
- forward
## BridgeTowerForImageAndTextRetrieval
[[autodoc]] BridgeTowerForImageAndTextRetrieval
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BART
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=bart">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-bart-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/bart-large-mnli">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
**å
責äºé
:** äœãå¥åŠãªãã®ãèŠã€ããå Žåã¯ã[Github åé¡](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) ãæåºã㊠@patrickvonplaten ã«å²ãåœãŠãŠãã ããã
## Overview
Bart ã¢ãã«ã¯ã[BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) ã§ Mike LewisãYinhan LiuãNaman GoyalãMarjan GhazvininejadãAbdelrahman MohamedãOmer LevyãVes StoyanovãLuke Zettlemoyer ã«ãã£ãŠ 2019 幎 10 æ 29 æ¥ã«ææ¡ãããŸããã
èŠçŽã«ãããšã
- Bart ã¯ãåæ¹åãšã³ã³ãŒã (BERT ãªã©) ãšå·Šããå³ãžã®ãã³ãŒã (GPT ãªã©) ãåããæšæºã® seq2seq/æ©æ¢°ç¿»èš³ã¢ãŒããã¯ãã£ã䜿çšããŸãã
- äºåãã¬ãŒãã³ã° ã¿ã¹ã¯ã«ã¯ãå
ã®æã®é åºãã©ã³ãã ã«ã·ã£ããã«ããæ°ããåã蟌ã¿ã¹ããŒã ãå«ãŸããŸãããã®ã¹ããŒã ã§ã¯ãããã¹ãã®ã¹ãã³ãåäžã®ãã¹ã¯ ããŒã¯ã³ã«çœ®ãæããããŸãã
- BART ã¯ãããã¹ãçæçšã«åŸ®èª¿æŽããå Žåã«ç¹ã«å¹æçã§ãããçè§£ã¿ã¹ã¯ã«ãé©ããŠããŸããåçã®ãã¬ãŒãã³ã° ãªãœãŒã¹ã§ GLUE ããã³ SQuAD ã«ããã RoBERTa ã®ããã©ãŒãã³ã¹ã«å¹æµããããŸããŸãªæœè±¡çå¯Ÿè©±ã質åå¿çãèŠçŽã¿ã¹ã¯ã§æå
端ã®çµæãéæããROUGE ã§æå€§ 6 ãã€ã³ãã®åäžãåŸãŸããã
ãããïŒ
- BART ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- ãšã³ã³ãŒããŒãšãã³ãŒããŒãåããã·ãŒã±ã³ã¹ããŒã·ãŒã±ã³ã¹ ã¢ãã«ã§ãããšã³ã³ãŒãã«ã¯ç ŽæããããŒãžã§ã³ã®ããŒã¯ã³ãäŸçµŠããããã³ãŒãã«ã¯å
ã®ããŒã¯ã³ãäŸçµŠãããŸãïŒãã ããéåžžã®ãã©ã³ã¹ãã©ãŒã㌠ãã³ãŒããšåæ§ã«ãå°æ¥ã®ã¯ãŒããé ãããã®ãã¹ã¯ããããŸãïŒã次ã®å€æã®æ§æã¯ããšã³ã³ãŒããŒã®äºåãã¬ãŒãã³ã° ã¿ã¹ã¯ã«é©çšãããŸãã
* ã©ã³ãã ãªããŒã¯ã³ããã¹ã¯ããŸã (BERT ãšåæ§)
* ã©ã³ãã ãªããŒã¯ã³ãåé€ããŸã
* k åã®ããŒã¯ã³ã®ã¹ãã³ã 1 ã€ã®ãã¹ã¯ ããŒã¯ã³ã§ãã¹ã¯ããŸã (0 ããŒã¯ã³ã®ã¹ãã³ã¯ãã¹ã¯ ããŒã¯ã³ã®æ¿å
¥ã§ã)
* æãäžŠã¹æ¿ããŸã
* ããã¥ã¡ã³ããå転ããŠç¹å®ã®ããŒã¯ã³ããéå§ããããã«ããŸã
ãã®ã¢ãã«ã¯ [sshleifer](https://huggingface.co/sshleifer) ã«ãã£ãŠæäŸãããŸãããèè
ã®ã³ãŒã㯠[ãã](https://github.com/pytorch/fairseq/tree/master/examples/bart) ã«ãããŸãã
### Examples
- ã·ãŒã±ã³ã¹éã¿ã¹ã¯çšã® BART ããã³ãã®ä»ã®ã¢ãã«ã埮調æŽããããã®äŸãšã¹ã¯ãªããã¯ã次ã®å Žæã«ãããŸãã
[examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md)ã
- Hugging Face `datasets` ã䜿çšã㊠[`BartForConditionalGeneration`] ããã¬ãŒãã³ã°ããæ¹æ³ã®äŸ
ãªããžã§ã¯ãã¯ããã® [ãã©ãŒã©ã ãã£ã¹ã«ãã·ã§ã³](https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904) ã§èŠã€ããããšãã§ããŸãã
- [æœåºããããã§ãã¯ãã€ã³ã](https://huggingface.co/models?search=distilbart) ã¯ããã® [è«æ](https://arxiv.org/abs/2010.13002) ã§èª¬æãããŠããŸãã
## Implementation Notes
- Bart ã¯ã·ãŒã±ã³ã¹ã®åé¡ã« `token_type_ids` ã䜿çšããŸããã [`BartTokenizer`] ã䜿çšãããã
[`~BartTokenizer.encode`] ã䜿çšããŠé©åã«åå²ããŸãã
- [`BartModel`] ã®ãã©ã¯ãŒã ãã¹ã¯ã`decoder_input_ids` ãæž¡ãããªãã£ãå Žåã«ãããèªåçã«äœæããŸããããã¯ä»ã®ã¢ããªã³ã° API ãšã¯ç°ãªããŸãããã®æ©èœã®äžè¬çãªäœ¿çšäŸã¯ããã¹ã¯ã®å¡ãã€ã¶ãã§ãã
- ã¢ãã«ã®äºæž¬ã¯ã`forced_bos_token_id=0` ãæå®ããå Žåã«å
ã®å®è£
ãšåäžã«ãªãããã«æå³ãããŠããŸãããã ããããã¯ [`fairseq.encode`] ã«æž¡ãæååãã¹ããŒã¹ã§å§ãŸãå Žåã«ã®ã¿æ©èœããŸãã
- èŠçŽãªã©ã®æ¡ä»¶ä»ãçæã¿ã¹ã¯ã«ã¯ [`~generation.GenerationMixin.generate`] ã䜿çšããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯ããã® docstring ã®äŸãåç
§ããŠãã ããã
- *facebook/bart-large-cnn* ã®éã¿ãããŒãããã¢ãã«ã«ã¯ `mask_token_id` ããªãããããã¹ã¯ãåããã¿ã¹ã¯ã¯å®è¡ã§ããŸããã
## Mask Filling
`facebook/bart-base` ããã³ `facebook/bart-large` ãã§ãã¯ãã€ã³ãã䜿çšããŠããã«ãããŒã¯ã³ ãã¹ã¯ãåããããšãã§ããŸãã
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
```
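åèãšããŠã`facebook/bart-large-cnn` ãã§ãã¯ãã€ã³ããš [`pipeline`] ã䜿ã£ãèŠçŽã®æå°ã¹ã±ãããæ¬¡ã«ç€ºããŸã (å
¥åæã¯ä»»æã®äŸã§ã)ã

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building. "
    "Its base is square, measuring 125 metres on each side. During its construction, "
    "the Eiffel Tower surpassed the Washington Monument to become the tallest "
    "man-made structure in the world."
)
# do_sample=False keeps the output deterministic (beam/greedy decoding)
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"]
print(summary)
```

`max_length` / `min_length` ã¯çæãããèŠçŽã®ããŒã¯ã³æ°ãå¶åŸ¡ããŸãã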
## Resources
BART ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã§ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="summarization"/>
- ã«é¢ããããã°æçš¿ [忣ãã¬ãŒãã³ã°: ð€ Transformers ãš Amazon SageMaker ã䜿çšããèŠçŽã®ããã® BART/T5 ã®ãã¬ãŒãã³ã°](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq)ã
- æ¹æ³ã«é¢ããããŒããã㯠[blurr ã䜿çšã㊠fastai ã§èŠçŽããããã« BART ã埮調æŽãã](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb). ð ð
- æ¹æ³ã«é¢ããããŒããã㯠[ãã¬ãŒã㌠ã¯ã©ã¹ã䜿çšã㊠2 ã€ã®èšèªã§èŠçŽããããã« BART ã埮調æŽãã](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)ã ð
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)ã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)ã
- [`FlaxBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization) ã§ãµããŒããããŠããŸãã
- [èŠçŽ](https://huggingface.co/course/chapter7/5?fw=pt#summarization) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
<PipelineTag pipeline="fill-mask"/>
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) ã§ãµããŒããããŠããã [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)ã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)ã
- [`FlaxBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) ããã³ [ããŒãããã¯]( https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb)ã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã°](https://huggingface.co/course/chapter7/3?fw=pt) ð€ é¡ãã° ã³ãŒã¹ã®ç« ã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
<PipelineTag pipeline="translation"/>
- [ãã³ãã£ãŒèªããè±èªãžã®ç¿»èš³ã« Seq2SeqTrainer ã䜿çšã㊠mBART ã埮調æŽããæ¹æ³ã«é¢ããããŒã](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)ã ð
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)ã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)ã
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
以äžãåç
§ããŠãã ããã
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [æœåºããããã§ãã¯ãã€ã³ã](https://huggingface.co/models?search=distilbart) ã¯ããã® [è«æ](https://arxiv.org/abs/2010.13002) ã§èª¬æãããŠããŸãã
## BartConfig
[[autodoc]] BartConfig
- all
## BartTokenizer
[[autodoc]] BartTokenizer
- all
## BartTokenizerFast
[[autodoc]] BartTokenizerFast
- all
## BartModel
[[autodoc]] BartModel
- forward
## BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration
- forward
## BartForSequenceClassification
[[autodoc]] BartForSequenceClassification
- forward
## BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering
- forward
## BartForCausalLM
[[autodoc]] BartForCausalLM
- forward
## TFBartModel
[[autodoc]] TFBartModel
- call
## TFBartForConditionalGeneration
[[autodoc]] TFBartForConditionalGeneration
- call
## TFBartForSequenceClassification
[[autodoc]] TFBartForSequenceClassification
- call
## FlaxBartModel
[[autodoc]] FlaxBartModel
- __call__
- encode
- decode
## FlaxBartForConditionalGeneration
[[autodoc]] FlaxBartForConditionalGeneration
- __call__
- encode
- decode
## FlaxBartForSequenceClassification
[[autodoc]] FlaxBartForSequenceClassification
- __call__
- encode
- decode
## FlaxBartForQuestionAnswering
[[autodoc]] FlaxBartForQuestionAnswering
- __call__
- encode
- decode
## FlaxBartForCausalLM
[[autodoc]] FlaxBartForCausalLM
- __call__
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CPM
## Overview
CPM ã¢ãã«ã¯ãZhengyan ZhangãXu HanãHao ZhouãPei KeãYuxian GuãDeming YeãYujia QinãYusheng SuãHaozhe JiãJian GuanãFanchao QiãXiaozi WangãYanan ZhengãGuoyang ZengãHuanqi CaoãShengqi ChenãDaixuan LiãZhenbo SunãZhiyuan LiuãMinlie HuangãWentao HanãJie TangãJuanzi LiãXiaoyan ZhuãMaosong Sun ã«ãã£ãŠ [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ãããèšèªã¢ãã« (PLM) ã¯ãããŸããŸãªäžæµã® NLP ã¿ã¹ã¯ã«æçã§ããããšã蚌æãããŠããŸããæè¿ã§ã¯ã1,750 ååã®ãã©ã¡ãŒã¿ãš 570GB ã®åŠç¿ããŒã¿ãåãã GPT-3 ããæ°ã·ã§ãã (ãŒãã·ã§ããã§ãã) åŠç¿ã®èœåã«ãã£ãŠå€§ããªæ³šç®ãéããŸããããã ããGPT-3 ãé©çšããŠäžåœèªã® NLP ã¿ã¹ã¯ã«å¯ŸåŠããããšã¯äŸç¶ãšããŠå°é£ã§ããGPT-3 ã®åŠç¿ã³ãŒãã¹ã¯äž»ã«è±èªã§ããããã©ã¡ãŒã¿ãŒãå
¬éãããŠããŸããããã®æè¡ã¬ããŒãã§ã¯ãå€§èŠæš¡ãªäžåœèªãã¬ãŒãã³ã° ããŒã¿ã«å¯Ÿããçæçäºåãã¬ãŒãã³ã°ãåããäžåœèªäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã« (CPM) ããªãªãŒã¹ããŸããç§ãã¡ã®ç¥ãéãã26 åã®ãã©ã¡ãŒã¿ãš 100GB ã®äžåœèªãã¬ãŒãã³ã° ããŒã¿ãåãã CPM ã¯ãäºåãã¬ãŒãã³ã°ãããäžåœèªèšèªã¢ãã«ãšããŠã¯æå€§ã®ãã®ã§ãããäŒè©±ããšãã»ã€ã®äœæãã¯ããŒãŒ ãã¹ããèšèªçè§£ãªã©ãå€ãã® NLP ã¿ã¹ã¯ã«åœ¹ç«ã¡ãŸããåºç¯ãªå®éšã«ãããCPM ãå°æ°ã·ã§ãã (ãŒãã·ã§ããã§ã) åŠç¿ã®èšå®ã«ãããŠãå€ãã® NLP ã¿ã¹ã¯ã§åªããããã©ãŒãã³ã¹ãéæã§ããããšãå®èšŒãããŠããŸãã*
ãã®ã¢ãã«ã¯ [canwenxu](https://huggingface.co/canwenxu) ã«ãã£ãŠæäŸãããŸããããªãªãžãã«ã®å®è£
ã¯ãã¡ã: https://github.com/TsinghuaAI/CPM-Generate
<Tip>
CPM ã®ã¢ãŒããã¯ãã£ã¯ãããŒã¯ã³åæ¹æ³ãé€ã㊠GPT-2 ãšåãã§ããAPI ãªãã¡ã¬ã³ã¹æ
å ±ã®è©³çްã«ã€ããŠã¯ã[GPT-2 ããã¥ã¡ã³ã](gpt2) ãåç
§ããŠãã ããã
</Tip>
## CpmTokenizer
[[autodoc]] CpmTokenizer
## CpmTokenizerFast
[[autodoc]] CpmTokenizerFast
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ã«ã¹ã¿ã ã¬ã€ã€ãŒãšãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ãã©ã€ãã©ãªã§äœ¿çšããããã¹ãŠã®ã«ã¹ã¿ã  ã¬ã€ã€ãŒãšãã¢ããªã³ã°ã«æäŸããããŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŸãããããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªå
ã®ã¢ãã«ã®ã³ãŒããç ç©¶ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
## Pytorch custom modules
[[autodoc]] pytorch_utils.Conv1D
[[autodoc]] modeling_utils.PoolerStartLogits
- forward
[[autodoc]] modeling_utils.PoolerEndLogits
- forward
[[autodoc]] modeling_utils.PoolerAnswerClass
- forward
[[autodoc]] modeling_utils.SquadHeadOutput
[[autodoc]] modeling_utils.SQuADHead
- forward
[[autodoc]] modeling_utils.SequenceSummary
- forward
## PyTorch Helper Functions
[[autodoc]] pytorch_utils.apply_chunking_to_forward
[[autodoc]] pytorch_utils.find_pruneable_heads_and_indices
[[autodoc]] pytorch_utils.prune_layer
[[autodoc]] pytorch_utils.prune_conv1d_layer
[[autodoc]] pytorch_utils.prune_linear_layer
## TensorFlow custom layers
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
## TensorFlow loss functions
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
## TensorFlow Helper Functions
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/file_utils.md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# General Utilities

This page lists all of Transformers' general utility functions that are found in the file `utils.py`.

Most of those are only useful if you are studying the general code in the library.
## Enums and namedtuples
[[autodoc]] utils.ExplicitEnum
[[autodoc]] utils.PaddingStrategy
[[autodoc]] utils.TensorType
## Special Decorators
[[autodoc]] utils.add_start_docstrings
[[autodoc]] utils.add_start_docstrings_to_model_forward
[[autodoc]] utils.add_end_docstrings
[[autodoc]] utils.add_code_sample_docstrings
[[autodoc]] utils.replace_return_docstrings
## Special Properties
[[autodoc]] utils.cached_property
## Other Utilities
[[autodoc]] utils._LazyModule
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/audio_utils.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for `FeatureExtractor`

This page lists all the utility functions that can be used by the audio [`FeatureExtractor`] in order to compute special features from a raw audio using common algorithms such as the *Short Time Fourier Transform* or the *log mel spectrogram*.

Most of those are only useful if you are studying the code of the audio processors in the library.
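As an illustration of the kind of computation these helpers perform, here is a minimal sketch of the standard HTK-style hertz/mel conversion. This is just one common convention; the library's `hertz_to_mel` also supports other mel scales, so treat this as a conceptual example rather than the library's exact implementation:

```python
import math


def hertz_to_mel(freq_hz):
    # HTK-style hertz -> mel conversion (one common convention).
    return 2595.0 * math.log10(1.0 + freq_hz / 700.0)


def mel_to_hertz(mel):
    # Inverse of the conversion above.
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)


print(hertz_to_mel(1000.0))                # close to 1000.0 mel by construction
print(mel_to_hertz(hertz_to_mel(440.0)))   # round-trips back to 440.0 Hz
```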
## Audio Transformations
[[autodoc]] audio_utils.hertz_to_mel
[[autodoc]] audio_utils.mel_to_hertz
[[autodoc]] audio_utils.mel_filter_bank
[[autodoc]] audio_utils.optimal_fft_length
[[autodoc]] audio_utils.window_function
[[autodoc]] audio_utils.spectrogram
[[autodoc]] audio_utils.power_to_db
[[autodoc]] audio_utils.amplitude_to_db
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/generation_utils.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Generation

This page lists all the utility functions used by [`~generation.GenerationMixin.generate`],
[`~generation.GenerationMixin.greedy_search`],
[`~generation.GenerationMixin.contrastive_search`],
[`~generation.GenerationMixin.sample`],
[`~generation.GenerationMixin.beam_search`],
[`~generation.GenerationMixin.beam_sample`],
[`~generation.GenerationMixin.group_beam_search`], and
[`~generation.GenerationMixin.constrained_beam_search`].

Most of those are only useful if you are studying the code of the generate methods in the library.
## Generate Outputs

The output of [`~generation.GenerationMixin.generate`] is an instance of a subclass of
[`~utils.ModelOutput`]. This output is a data structure containing all the information returned
by [`~generation.GenerationMixin.generate`], but that can also be used as a tuple or a dictionary.

Here's an example:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
The `generation_output` object is a [`~generation.GreedySearchDecoderOnlyOutput`]; as we can see in the documentation of that class below, it means it has the following attributes:

- `sequences`: the generated sequences of tokens
- `scores` (optional): the prediction scores of the language modeling head, for each generation step
- `hidden_states` (optional): the hidden states of the model, for each generation step
- `attentions` (optional): the attention weights of the model, for each generation step

Here we have the `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here for instance `generation_output.scores` are all the generated prediction scores of the language modeling head, and `generation_output.attentions` is `None`.
When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values. Here, for instance, it has two elements, `sequences` then `scores`, so
```python
generation_output[:2]
```
will return the tuple `(generation_output.sequences, generation_output.scores)` for instance.
When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None` values. Here, for instance, it has two keys that are `sequences` and `scores`.
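To make the tuple/dictionary behavior concrete without loading a model, here is a toy stand-in (a hypothetical class, not the actual `ModelOutput` implementation) that mimics how `None`-valued attributes are dropped when the output is used as a dictionary or converted to a tuple:

```python
class SketchGenerateOutput(dict):
    """A toy illustration of generate-output behavior; not the real class."""

    def __init__(self, **fields):
        # Only fields that are not None are kept as dictionary keys.
        super().__init__((k, v) for k, v in fields.items() if v is not None)

    def __getattr__(self, name):
        # Attribute access returns None for fields the model did not produce.
        return self.get(name)

    def to_tuple(self):
        return tuple(self.values())


out = SketchGenerateOutput(
    sequences=[[464, 3290, 318]], scores=[0.1, 0.2], hidden_states=None, attentions=None
)
print(out.sequences)     # the generated token ids
print(out.attentions)    # None -- not returned, so attribute access gives None
print(list(out.keys()))  # ['sequences', 'scores'] -- only non-None keys survive
print(out.to_tuple())    # ([[464, 3290, 318]], [0.1, 0.2])
```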
We document here all output types.
### PyTorch
[[autodoc]] generation.GreedySearchEncoderDecoderOutput
[[autodoc]] generation.GreedySearchDecoderOnlyOutput
[[autodoc]] generation.SampleEncoderDecoderOutput
[[autodoc]] generation.SampleDecoderOnlyOutput
[[autodoc]] generation.BeamSearchEncoderDecoderOutput
[[autodoc]] generation.BeamSearchDecoderOnlyOutput
[[autodoc]] generation.BeamSampleEncoderDecoderOutput
[[autodoc]] generation.BeamSampleDecoderOnlyOutput
[[autodoc]] generation.ContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.ContrastiveSearchDecoderOnlyOutput
### TensorFlow
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
### FLAX
[[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput
## LogitsProcessor
A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for generation.
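The essence of a logits processor is a callable that takes the ids generated so far plus the next-token scores and returns modified scores. As a rough, framework-free sketch of that idea (the real classes below operate on batched tensors and share a common `__call__` interface):

```python
def suppress_token_processor(token_id):
    # Build a toy processor that makes one token unsamplable by pushing its
    # score to -infinity -- the spirit of the token-suppression processors below.
    def process(input_ids, scores):
        scores = list(scores)  # work on a copy, leave the input untouched
        scores[token_id] = float("-inf")
        return scores

    return process


process = suppress_token_processor(token_id=2)
print(process([0, 1], [0.5, 1.2, 3.0, 0.1]))  # [0.5, 1.2, -inf, 0.1]
```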
### PyTorch
[[autodoc]] AlternatingCodebooksLogitsProcessor
- __call__
[[autodoc]] ClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] EncoderNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] EncoderRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] EpsilonLogitsWarper
- __call__
[[autodoc]] EtaLogitsWarper
- __call__
[[autodoc]] ExponentialDecayLengthPenalty
- __call__
[[autodoc]] ForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] ForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] ForceTokensLogitsProcessor
- __call__
[[autodoc]] HammingDiversityLogitsProcessor
- __call__
[[autodoc]] InfNanRemoveLogitsProcessor
- __call__
[[autodoc]] LogitNormalization
- __call__
[[autodoc]] LogitsProcessor
- __call__
[[autodoc]] LogitsProcessorList
- __call__
[[autodoc]] LogitsWarper
- __call__
[[autodoc]] MinLengthLogitsProcessor
- __call__
[[autodoc]] MinNewTokensLengthLogitsProcessor
- __call__
[[autodoc]] NoBadWordsLogitsProcessor
- __call__
[[autodoc]] NoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] PrefixConstrainedLogitsProcessor
- __call__
[[autodoc]] RepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] SequenceBiasLogitsProcessor
- __call__
[[autodoc]] SuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] SuppressTokensLogitsProcessor
- __call__
[[autodoc]] TemperatureLogitsWarper
- __call__
[[autodoc]] TopKLogitsWarper
- __call__
[[autodoc]] TopPLogitsWarper
- __call__
[[autodoc]] TypicalLogitsWarper
- __call__
[[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] WhisperTimeStampLogitsProcessor
- __call__
### TensorFlow
[[autodoc]] TFForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForceTokensLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessorList
- __call__
[[autodoc]] TFLogitsWarper
- __call__
[[autodoc]] TFMinLengthLogitsProcessor
- __call__
[[autodoc]] TFNoBadWordsLogitsProcessor
- __call__
[[autodoc]] TFNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] TFRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensLogitsProcessor
- __call__
[[autodoc]] TFTemperatureLogitsWarper
- __call__
[[autodoc]] TFTopKLogitsWarper
- __call__
[[autodoc]] TFTopPLogitsWarper
- __call__
### FLAX
[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForceTokensLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessorList
- __call__
[[autodoc]] FlaxLogitsWarper
- __call__
[[autodoc]] FlaxMinLengthLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensLogitsProcessor
- __call__
[[autodoc]] FlaxTemperatureLogitsWarper
- __call__
[[autodoc]] FlaxTopKLogitsWarper
- __call__
[[autodoc]] FlaxTopPLogitsWarper
- __call__
[[autodoc]] FlaxWhisperTimeStampLogitsProcessor
- __call__
## StoppingCriteria
A [`StoppingCriteria`] can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations.
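Conceptually, a stopping criterion is a callable that looks at the sequence generated so far and answers whether generation should stop. A minimal, framework-free sketch of the max-length case (the real `MaxLengthCriteria` operates on batched tensors):

```python
def max_length_criteria(max_length):
    # A toy criterion mirroring MaxLengthCriteria: stop once the running
    # sequence reaches max_length tokens.
    def should_stop(input_ids, scores=None):
        return len(input_ids) >= max_length

    return should_stop


stop = max_length_criteria(max_length=5)
print(stop([101, 2009, 2003]))             # False -- only 3 tokens so far
print(stop([101, 2009, 2003, 2013, 102]))  # True -- limit reached
```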
[[autodoc]] StoppingCriteria
- __call__
[[autodoc]] StoppingCriteriaList
- __call__
[[autodoc]] MaxLengthCriteria
- __call__
[[autodoc]] MaxTimeCriteria
- __call__
## Constraints
A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
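The core check behind a phrasal constraint is whether a forced sub-sequence appears intact in a hypothesis. Here is a toy version of that check; the actual constrained beam search enforces it incrementally, token by token, rather than post hoc:

```python
def contains_sequence(output_ids, forced_ids):
    # Return True if forced_ids occurs as a contiguous sub-sequence of
    # output_ids -- the condition a PhrasalConstraint-style constraint
    # enforces on finished hypotheses.
    n = len(forced_ids)
    return any(output_ids[i:i + n] == forced_ids for i in range(len(output_ids) - n + 1))


print(contains_sequence([5, 3, 7, 8, 2], [7, 8]))  # True -- appears contiguously
print(contains_sequence([5, 3, 7, 2, 8], [7, 8]))  # False -- tokens are split apart
```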
[[autodoc]] Constraint
[[autodoc]] PhrasalConstraint
[[autodoc]] DisjunctiveConstraint
[[autodoc]] ConstraintListState
## BeamSearch
[[autodoc]] BeamScorer
- process
- finalize
[[autodoc]] BeamSearchScorer
- process
- finalize
[[autodoc]] ConstrainedBeamSearchScorer
- process
- finalize
## Utilities
[[autodoc]] top_k_top_p_filtering
[[autodoc]] tf_top_k_top_p_filtering
## Streamers
[[autodoc]] TextStreamer
[[autodoc]] TextIteratorStreamer
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/time_series_utils.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Time Series Utilities

This page lists all the utility functions and classes that can be used for Time Series based models.

Most of those are only useful if you are studying the code of the time series models, or wish to add to the collection of distributional output classes.
## Distributional Output
[[autodoc]] time_series_utils.NormalOutput
[[autodoc]] time_series_utils.StudentTOutput
[[autodoc]] time_series_utils.NegativeBinomialOutput | 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/tokenization_utils.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Tokenizers
This page lists all the utility functions used by the tokenizers, mainly the class
[`~tokenization_utils_base.PreTrainedTokenizerBase`] that implements the common methods between
[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] and the mixin
[`~tokenization_utils_base.SpecialTokensMixin`].

Most of those are only useful if you are studying the code of the tokenizers in the library.
## PreTrainedTokenizerBase
[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
- __call__
- all
## SpecialTokensMixin
[[autodoc]] tokenization_utils_base.SpecialTokensMixin
## Enums and namedtuples
[[autodoc]] tokenization_utils_base.TruncationStrategy
[[autodoc]] tokenization_utils_base.CharSpan
[[autodoc]] tokenization_utils_base.TokenSpan
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/image_processing_utils.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Image Processors

This page lists all the utility functions used by the image processors, mainly the functional transformations used to process the images.

Most of those are only useful if you are studying the code of the image processors in the library.
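For a flavor of what these functional transforms do, here is a toy, dependency-free version of a center crop on a 2D "image" given as nested lists. The real `image_transforms.center_crop` works on NumPy arrays and additionally handles channel layouts and padding, so this is only a conceptual sketch:

```python
def center_crop(image, crop_height, crop_width):
    # Keep the centered crop_height x crop_width window of a 2D list "image".
    height, width = len(image), len(image[0])
    top = (height - crop_height) // 2
    left = (width - crop_width) // 2
    return [row[left:left + crop_width] for row in image[top:top + crop_height]]


image = [[r * 4 + c for c in range(4)] for r in range(4)]  # a 4x4 "image"
print(center_crop(image, 2, 2))  # [[5, 6], [9, 10]]
```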
## Image Transformations
[[autodoc]] image_transforms.center_crop
[[autodoc]] image_transforms.center_to_corners_format
[[autodoc]] image_transforms.corners_to_center_format
[[autodoc]] image_transforms.id_to_rgb
[[autodoc]] image_transforms.normalize
[[autodoc]] image_transforms.pad
[[autodoc]] image_transforms.rgb_to_id
[[autodoc]] image_transforms.rescale
[[autodoc]] image_transforms.resize
[[autodoc]] image_transforms.to_pil_image
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/trainer_utils.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Trainer

This page lists all the utility functions used by [`Trainer`].

Most of those are only useful if you are studying the code of the Trainer in the library.
## Utilities
[[autodoc]] EvalPrediction
[[autodoc]] IntervalStrategy
[[autodoc]] enable_full_determinism
[[autodoc]] set_seed
[[autodoc]] torch_distributed_zero_first
## Callbacks internals
[[autodoc]] trainer_callback.CallbackHandler
## Distributed Evaluation
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
## Trainer Argument Parser
[[autodoc]] HfArgumentParser
## Debug Utilities
[[autodoc]] debug_utils.DebugUnderflowOverflow
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/internal/pipelines_utils.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for pipelines

This page lists all the utility functions the library provides for pipelines.

Most of those are only useful if you are studying the code of the models in the library.
## Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
## Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
## Utilities
[[autodoc]] pipelines.PipelineException
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/agent.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Agents & Tools
<Tip warning={true}>
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.
</Tip>
To learn more about agents and tools make sure to read the [introductory guide](../transformers_agents). This page
contains the API docs for the underlying classes.
## Agents
We provide three types of agents: [`HfAgent`] uses inference endpoints for open-source models, [`LocalAgent`] uses a model of your choice locally, and [`OpenAiAgent`] uses OpenAI closed models.
### HfAgent
[[autodoc]] HfAgent
### LocalAgent
[[autodoc]] LocalAgent
### OpenAiAgent
[[autodoc]] OpenAiAgent
### AzureOpenAiAgent
[[autodoc]] AzureOpenAiAgent
### Agent
[[autodoc]] Agent
- chat
- run
- prepare_for_new_chat
## Tools
### load_tool
[[autodoc]] load_tool
### Tool
[[autodoc]] Tool
### PipelineTool
[[autodoc]] PipelineTool
### RemoteTool
[[autodoc]] RemoteTool
### launch_gradio_demo
[[autodoc]] launch_gradio_demo
## Agent Types

Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return
text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to
correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes
around these types.

The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image
object should still behave as a `PIL.Image`.

These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object
- Calling `to_string` on the type should return the object as a string: that can be the string in case of an
  `AgentText`, but will be the path of the serialized version of the object in other instances
- Displaying it in an ipython kernel should display the object correctly
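As a minimal sketch of that contract (a hypothetical class name, not the library implementation), a text wrapper can subclass `str` so it keeps behaving like an ordinary string while exposing the common agent-type interface:

```python
class SketchAgentText(str):
    # A toy text wrapper: still a plain string, with the common
    # agent-type interface bolted on.
    def to_raw(self):
        return str(self)

    def to_string(self):
        return str(self)


text = SketchAgentText("hello agents")
print(text.upper())      # HELLO AGENTS -- ordinary string behavior is preserved
print(text.to_string())  # hello agents
```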
### AgentText
[[autodoc]] transformers.tools.agent_types.AgentText
### AgentImage
[[autodoc]] transformers.tools.agent_types.AgentImage
### AgentAudio
[[autodoc]] transformers.tools.agent_types.AgentAudio
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/feature_extractor.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
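To illustrate the padding and normalization part, here is a dependency-free toy of what a sequence feature extractor does to a batch of variable-length audio arrays. The real `SequenceFeatureExtractor.pad` works on tensors and supports more padding/truncation strategies, so treat this only as a sketch of the idea:

```python
def pad_and_normalize(batch, pad_value=0.0):
    # Bring variable-length sequences to a common length with an attention
    # mask, zero-mean / unit-variance normalizing each sequence first.
    max_len = max(len(seq) for seq in batch)
    padded, masks = [], []
    for seq in batch:
        mean = sum(seq) / len(seq)
        var = sum((x - mean) ** 2 for x in seq) / len(seq)
        norm = [(x - mean) / (var ** 0.5 + 1e-7) for x in seq]
        padded.append(norm + [pad_value] * (max_len - len(seq)))
        masks.append([1] * len(seq) + [0] * (max_len - len(seq)))
    return {"input_values": padded, "attention_mask": masks}


out = pad_and_normalize([[0.1, 0.3, 0.5], [0.2, 0.4]])
print(len(out["input_values"][1]))  # 3 -- padded to the batch maximum
print(out["attention_mask"])        # [[1, 1, 1], [1, 1, 0]]
```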
## FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
## SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
## BatchFeature
[[autodoc]] BatchFeature
## ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/text_generation.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Generation
Each framework has a generate method for text generation implemented in their respective `GenerationMixin` class:

- PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`].
- TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`].
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`].
Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.

To learn how to inspect a model's generation configuration, what are the defaults, how to ad hoc modify the parameters, and how to create and save a customized generation configuration, refer to the [text generation strategies guide](../generation_strategies). The guide also explains how to use related features, like token streaming.
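As a rough illustration of that workflow (a plain stand-in, not the real `GenerationConfig` API), the pattern is a parameter container that can be saved next to a model and re-loaded later:

```python
import json
import os
import tempfile
from dataclasses import asdict, dataclass


@dataclass
class SketchGenerationConfig:
    # A toy stand-in for GenerationConfig: a parameter container with a
    # save/load round trip, mirroring the workflow the guide describes.
    max_new_tokens: int = 20
    do_sample: bool = False
    temperature: float = 1.0

    def save_pretrained(self, directory):
        with open(os.path.join(directory, "generation_config.json"), "w") as f:
            json.dump(asdict(self), f)

    @classmethod
    def from_pretrained(cls, directory):
        with open(os.path.join(directory, "generation_config.json")) as f:
            return cls(**json.load(f))


with tempfile.TemporaryDirectory() as tmp:
    SketchGenerationConfig(max_new_tokens=50, do_sample=True).save_pretrained(tmp)
    reloaded = SketchGenerationConfig.from_pretrained(tmp)
print(reloaded.max_new_tokens)  # 50
```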
## GenerationConfig
[[autodoc]] generation.GenerationConfig
- from_pretrained
- from_model_config
- save_pretrained
## GenerationMixin
[[autodoc]] generation.GenerationMixin
- generate
- compute_transition_scores
- greedy_search
- sample
- beam_search
- beam_sample
- contrastive_search
- group_beam_search
- constrained_beam_search
## TFGenerationMixin
[[autodoc]] generation.TFGenerationMixin
- generate
- compute_transition_scores
## FlaxGenerationMixin
[[autodoc]] generation.FlaxGenerationMixin
- generate
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/tokenizer.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library [ð€ Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:

1. a significant speed-up in particular when doing batched tokenization and
2. additional methods to map between the original string (character and words) and the token space (e.g., getting the
   index of the token comprising a given character or the span of characters corresponding to a given token).

The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] implement the common methods for encoding
string inputs in model inputs (see below) and instantiating/saving python and "Fast" tokenizers either from a local
file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3
repository). They both rely on [`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods,
and [`~tokenization_utils_base.SpecialTokensMixin`].

[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main methods for using all the tokenizers:

- Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and
  encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
  tokenizer for easy access and making sure they are not split during tokenization.

[`BatchEncoding`] holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods
(`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a
pure python tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs
computed by these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed
by HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition
several advanced alignment methods which can be used to map between the original string (character and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token).
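The alignment idea is easy to see with a toy tokenizer: record, for each token, the span of characters it covers, and both directions of lookup fall out. This is only a sketch of the concept; fast tokenizers expose it through methods such as `char_to_token` on [`BatchEncoding`]:

```python
import re


def tokenize_with_offsets(text):
    # A toy whitespace tokenizer that records, for each token, the span of
    # characters it covers in the original string -- the kind of alignment
    # information a "Fast" tokenizer exposes.
    return [(m.group(), (m.start(), m.end())) for m in re.finditer(r"\S+", text)]


def char_to_token(offsets, char_index):
    # Return the index of the token whose character span contains char_index.
    for i, (_, (start, end)) in enumerate(offsets):
        if start <= char_index < end:
            return i
    return None


tokens = tokenize_with_offsets("Hello brave new world")
print(tokens[1])                  # ('brave', (6, 11))
print(char_to_token(tokens, 12))  # 2 -- character 12 falls inside 'new'
```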
## PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## PreTrainedTokenizerFast
[`PreTrainedTokenizerFast`] 㯠[tokenizers](https://huggingface.co/docs/tokenizers) ã©ã€ãã©ãªã«äŸåããŸãã ð€ ããŒã¯ãã€ã¶ãŒ ã©ã€ãã©ãªããååŸããããŒã¯ãã€ã¶ãŒã¯ã
ð€ ãã©ã³ã¹ã«éåžžã«ç°¡åã«ããŒããããŸãããããã©ã®ããã«è¡ãããããçè§£ããã«ã¯ã[ð€ tokenizers ããã® tokenizers ã䜿çšãã](../fast_tokenizers) ããŒãžãåç
§ããŠãã ããã
[[autodoc]] PreTrainedTokenizerFast
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## BatchEncoding
[[autodoc]] BatchEncoding
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/optimizer_schedules.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Optimization
The `.optimization` module provides:

- an optimizer with weight decay fixed that can be used to fine-tune models, and
- several schedules in the form of schedule objects that inherit from `_LRSchedule`, and
- a gradient accumulation class to accumulate the gradients of multiple batches
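As a sketch of what one of these schedule objects computes, here is the learning-rate multiplier behind a linear warmup followed by linear decay (the shape of `get_linear_schedule_with_warmup`), written as a plain function:

```python
def linear_schedule_with_warmup(step, num_warmup_steps, num_training_steps):
    # Multiplier applied to the base learning rate at a given step: ramp up
    # linearly during warmup, then decay linearly to zero.
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(
        0.0, (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps)
    )


print(linear_schedule_with_warmup(5, 10, 100))    # 0.5 -- halfway through warmup
print(linear_schedule_with_warmup(10, 10, 100))   # 1.0 -- warmup finished
print(linear_schedule_with_warmup(55, 10, 100))   # 0.5 -- halfway through the decay
print(linear_schedule_with_warmup(100, 10, 100))  # 0.0 -- fully decayed
```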
## AdamW (PyTorch)
[[autodoc]] AdamW
## AdaFactor (PyTorch)
[[autodoc]] Adafactor
## AdamWeightDecay (TensorFlow)
[[autodoc]] AdamWeightDecay
[[autodoc]] create_optimizer
## Schedules
### Learning Rate Schedules (Pytorch)
[[autodoc]] SchedulerType
[[autodoc]] get_scheduler
[[autodoc]] get_constant_schedule
[[autodoc]] get_constant_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>
[[autodoc]] get_cosine_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>
[[autodoc]] get_linear_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>
[[autodoc]] get_polynomial_decay_schedule_with_warmup
[[autodoc]] get_inverse_sqrt_schedule
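`get_linear_schedule_with_warmup` ãèšç®ããåŠç¿çä¿æ°ïŒãŠã©ãŒã ã¢ããäžã¯ 0 ãã 1 ã«ç·åœ¢ã«å¢å ããã®åŸ 0 ã«åãã£ãŠç·åœ¢ã«æžè¡°ïŒã¯ãæŠã次ã®ããã«ã¹ã±ããã§ããŸããæåã瀺ãããã®åèå®è£
ã§ãããã©ã€ãã©ãªã®å®éã®ã³ãŒãã§ã¯ãããŸããã

```python
def linear_factor(step, num_warmup_steps, num_training_steps):
    """ã¹ããã step ã§ããŒã¹åŠç¿çã«æãããä¿æ°ãè¿ãã"""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)  # ãŠã©ãŒã ã¢ãã: 0 â 1
    # ãŠã©ãŒã ã¢ãã以éã¯ç·åœ¢ã« 0 ãŸã§æžè¡°
    return max(0.0, (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps))


print([round(linear_factor(s, num_warmup_steps=2, num_training_steps=10), 2) for s in range(0, 11, 2)])
# → [0.0, 1.0, 0.75, 0.5, 0.25, 0.0]
```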
### Warmup (TensorFlow)
[[autodoc]] WarmUp
## Gradient Strategies
### GradientAccumulator (TensorFlow)
[[autodoc]] GradientAccumulator
<!-- File: docs/source/ja/main_classes/model.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Models
ããŒã¹ã¯ã©ã¹ã§ãã [`PreTrainedModel`]ã[`TFPreTrainedModel`]ã[`FlaxPreTrainedModel`] ã¯ãã¢ãã«ã®èªã¿èŸŒã¿ãšä¿åã«é¢ããå
±éã®ã¡ãœãããå®è£
ããŠãããããã¯ããŒã«ã«ã®ãã¡ã€ã«ããã£ã¬ã¯ããªããããŸãã¯ã©ã€ãã©ãªãæäŸããäºååŠç¿ã¢ãã«æ§æïŒHuggingFace ã® AWS S3 ãªããžããªããããŠã³ããŒãïŒããã¢ãã«ãèªã¿èŸŒãããã«äœ¿çšã§ããŸãã

[`PreTrainedModel`] ãš [`TFPreTrainedModel`] ã¯ã次ã®å
±éã¡ãœãããå®è£
ããŠããŸãïŒ

- èªåœã«æ°ããããŒã¯ã³ã远å ãããå Žåã«ãå
¥åããŒã¯ã³åã蟌ã¿ããªãµã€ãºãã
- ã¢ãã«ã®ã¢ãã³ã·ã§ã³ããããåã蟌ã

åã¢ãã«ã«å
±éãããã®ä»ã®ã¡ãœããã¯ã[`~modeling_utils.ModuleUtilsMixin`]ïŒPyTorch ã¢ãã«çšïŒããã³ [`~modeling_tf_utils.TFModuleUtilsMixin`]ïŒTensorFlow ã¢ãã«çšïŒã§å®çŸ©ãããŠããŸããããã¹ãçæã«ã€ããŠã¯ã[`~generation.GenerationMixin`]ïŒPyTorch ã¢ãã«çšïŒã[`~generation.TFGenerationMixin`]ïŒTensorFlow ã¢ãã«çšïŒãããã³ [`~generation.FlaxGenerationMixin`]ïŒFlax/JAX ã¢ãã«çšïŒããããŸãã
## PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
<a id='from_pretrained-torch-dtype'></a>
### å€§èŠæš¡ã¢ãã«ã®èªã¿èŸŒã¿
Transformers 4.20.0 ã§ã¯ã[`~PreTrainedModel.from_pretrained`] ã¡ãœãããåèšèšããã[Accelerate](https://huggingface.co/docs/accelerate/big_modeling) ã䜿çšããŠå€§èŠæš¡ã¢ãã«ãæ±ãããšãå¯èœã«ãªããŸãããããã«ã¯ Accelerate >= 0.9.0 ãš PyTorch >= 1.9.0 ãå¿
èŠã§ãã以åã®ããã«ãã«ã¢ãã«ãäœæãããã®äžã«äºååŠç¿ã®éã¿ãèªã¿èŸŒã代ããã«ïŒãã®æ¹æ³ã§ã¯ãã©ã³ãã ã«åæåãããã¢ãã«çšãšéã¿çšã§ã¢ãã«ãµã€ãºã® 2 åã®ã¡ã¢ãªãå¿
èŠã§ãïŒãã¢ãã«ã空ã®å€æ®»ãšããŠäœæããäºååŠç¿ã®éã¿ãèªã¿èŸŒãŸãããšãã«åããŠãã©ã¡ãŒã¿ãŒãå®äœåãããªãã·ã§ã³ã远å ãããŸããã

ãã®ãªãã·ã§ã³ã¯ `low_cpu_mem_usage=True` ã§æå¹ã«ã§ããŸããã¢ãã«ã¯ãŸã空ã®éã¿ãæã€ã¡ã¿ããã€ã¹äžã«äœæããããã®åŸãç¶æ
èŸæžãå
éšã«èªã¿èŸŒãŸããŸãïŒã·ã£ãŒãããããã§ãã¯ãã€ã³ãã®å Žåã¯ã·ã£ãŒãããšã«èªã¿èŸŒãŸããŸãïŒããã®æ¹æ³ã§äœ¿çšãããæå€§ RAM ã¯ãã¢ãã«ã®å®å
šãªãµã€ãºåã ãã§ãã
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```
ããã«ãã¢ãã«ãå®å
šã« RAM ã«åãŸããªãå ŽåïŒçŸæç¹ã§ã¯æšè«ã®ã¿ã«æå¹ïŒãç°ãªãããã€ã¹ã«ã¢ãã«ãçŽæ¥é
眮ã§ããŸãã`device_map="auto"` ã䜿çšãããšãAccelerate ã¯åã¬ã€ã€ãŒãã©ã®ããã€ã¹ã«é
眮ããããæ±ºå®ããæéã®ããã€ã¹ïŒGPUïŒãæå€§éã«æŽ»çšããæ®ãã®éšåã CPUãããã« GPU RAM ãäžè¶³ããå Žåã¯ããŒããã©ã€ãã«ãªãããŒãããŸããã¢ãã«ãè€æ°ã®ããã€ã¹ã«åå²ãããŠããŠããéåžžã©ããå®è¡ãããŸãã

`device_map` ãæž¡ãéã`low_cpu_mem_usage` ã¯èªåçã« `True` ã«èšå®ãããããããããæå®ããå¿
èŠã¯ãããŸããã
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
ã¢ãã«ãããã€ã¹éã§ã©ã®ããã«åå²ããããã¯ããã® `hf_device_map` 屿§ãèŠãããšã§ç¢ºèªã§ããŸã:
```py
t0pp.hf_device_map
```
```python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
```
åããã©ãŒãããïŒã¬ã€ã€ãŒåããããã€ã¹ãžã®èŸæžïŒã«åŸã£ãŠãç¬èªã®ããã€ã¹ããããäœæããããšãã§ããŸããã¢ãã«ã®ãã¹ãŠã®ãã©ã¡ãŒã¿ãæå®ãããããã€ã¹ã«ãããããå¿
èŠããããŸãããã1 ã€ã®ã¬ã€ã€ãŒãå®å
šã«åãããã€ã¹ã«ããå Žåããã®ã¬ã€ã€ãŒã®ãã¹ãŠã®ãµãã¢ãžã¥ãŒã«ã®é
眮å
ãåå¥ã«ç€ºãå¿
èŠã¯ãããŸãããããšãã°ã次ã®ããã€ã¹ããã㯠T0pp ã«é©ããŠããŸãïŒGPU ã¡ã¢ãªã«äœè£ãããå ŽåïŒ:
```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```
ã¢ãã«ã®ã¡ã¢ãªãžã®åœ±é¿ãæå°éã«æãããã 1 ã€ã®æ¹æ³ã¯ãäœç²ŸåºŠã® dtype (`torch.float16` ãªã©) ã§ã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åãããã以äžã§èª¬æããçŽæ¥éååææ³ã䜿çšããããšã§ãã
### Model Instantiation dtype
PyTorch ã§ã¯ãã¢ãã«ã¯éåžž `torch.float32` 圢åŒã§ã€ã³ã¹ã¿ã³ã¹åãããŸãããã®ãããéã¿ã fp16 ã§ä¿åãããã¢ãã«ãããŒãããããšãããšãå¿
èŠãªã¡ã¢ãªã 2 åã«ãªããšããåé¡ãçããå¯èœæ§ããããŸãããã®å¶éãå
æããã«ã¯ã`torch_dtype` åŒæ°ã䜿çšããŠãç®çã® `dtype` ãæç€ºçã«æž¡ããŸãã
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```
ãŸãã¯ãã¢ãã«ãåžžã«æé©ãªã¡ã¢ãª ãã¿ãŒã³ã§ããŒããããå Žåã¯ãç¹å¥ãªå€ `"auto"` ã䜿çšã§ããŸãã
ãããŠã`dtype` ã¯ã¢ãã«ã®éã¿ããèªåçã«å°åºãããŸãã
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
ã¹ã¯ã©ããããã€ã³ã¹ã¿ã³ã¹åãããã¢ãã«ã«ã¯ãã©ã® `dtype` ã䜿çšããããæç€ºããããšãã§ããŸãã
```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```
Pytorch ã®èšèšã«ããããã®æ©èœã¯æµ®åå°æ°ç¹ dtype ã§ã®ã¿äœ¿çšã§ããŸãã
## ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
## TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
## TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
## FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
## Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
## Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
<!-- File: docs/source/ja/main_classes/pipelines.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Pipelines
ãã€ãã©ã€ã³ã¯ãæšè«ã«ã¢ãã«ã䜿ãããã®ç°¡åã§åªããæ¹æ³ã§ãããã€ãã©ã€ã³ã¯ãã©ã€ãã©ãªããè€éãªã³ãŒãã®ã»ãšãã©ãæœè±¡åãããªããžã§ã¯ãã§ãããåºæè¡šçŸèªèããã¹ã¯èšèªã¢ããªã³ã°ãææ
åæãç¹åŸŽæœåºã質åå¿çãªã©ã®ã¿ã¹ã¯ã«ç¹åããã·ã³ãã«ãª API ãæäŸããŸããã¿ã¹ã¯ã®äžèЧã«ã€ããŠã¯ [ã¿ã¹ã¯æŠèŠ](../task_summary) ãåç
§ããŠãã ããã
ãã€ãã©ã€ã³ã®æœè±¡åã«ã¯2ã€ã®ã«ããŽãªãŒãããïŒ
- [`pipeline`] ã¯ãä»ã®ãã¹ãŠã®ãã€ãã©ã€ã³ãã«ãã»ã«åããæã匷åãªãªããžã§ã¯ãã§ãã
- ã¿ã¹ã¯åºæã®ãã€ãã©ã€ã³ã¯ã[ãªãŒãã£ãª](#audio)ã[ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³](#computer-vision)ã[èªç¶èšèªåŠç](#natural-language-processing)ãããã³ [ãã«ãã¢ãŒãã«](#multimodal) ã¿ã¹ã¯ã§äœ¿çšã§ããŸãã
## The pipeline abstraction
*pipeline* æœè±¡åã¯ãä»ã®ãã¹ãŠã®å©çšå¯èœãªãã€ãã©ã€ã³ã®ã©ãããŒã§ããä»ã®ãã€ãã©ã€ã³ãšåæ§ã«ã€ã³ã¹ã¿ã³ã¹åã§ããå©äŸ¿æ§ãé«ããŠãããŸãã

1 ã€ã®é
ç®ã«å¯ŸããåçŽãªåŒã³åºã:
```python
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
[Hub](https://huggingface.co) ã®ç¹å®ã®ã¢ãã«ã䜿çšãããå ŽåãHub äžã®ã¢ãã«ããã§ã«ã¿ã¹ã¯ãå®çŸ©ããŠããã°ãã¿ã¹ã¯ã®æå®ã¯çç¥ã§ããŸãã
```python
>>> pipe = pipeline(model="roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
è€æ°ã®é
ç®ã«å¯ŸããŠãã€ãã©ã€ã³ãåŒã³åºãã«ã¯ã*list* ãæž¡ããŸãã
```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
ããŒã¿ã»ããå
šäœãå埩åŠçããã«ã¯ã`dataset` ãçŽæ¥äœ¿çšããããšããå§ãããŸããããã«ãããããŒã¿ã»ããå
šäœãäžåºŠã«ã¡ã¢ãªã«å²ãåœãŠãå¿
èŠãããããåŠçãèªåã§è¡ãå¿
èŠããªããªããŸããããã¯ GPU äžã®ã«ã¹ã¿ã ã«ãŒããšåããããã®éãã§åäœããã¯ãã§ããããåé¡ãããå Žåã¯ãé æ
®ãªã Issue ãäœæããŠãã ããã
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
䜿ããããããããã«ããžã§ãã¬ãŒã¿ãŒã䜿çšããããšãã§ããŸãã
```python
from transformers import pipeline
pipe = pipeline("text-classification")
def data():
while True:
# This could come from a dataset, a database, a queue or HTTP request
# in a server
# Caveat: because this is iterative, you cannot use `num_workers > 1` variable
# to use multiple threads to preprocess data. You can still have 1 thread that
# does the preprocessing while the main runs the big inference
yield "This is a test"
for out in pipe(data()):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
[[autodoc]] pipeline
## Pipeline batching
ãã¹ãŠã®ãã€ãã©ã€ã³ã§ãããåŠçã䜿çšã§ããŸãããããåŠçã¯ããã€ãã©ã€ã³ãã¹ããªãŒãã³ã°æ©èœã䜿çšãããšã (ã€ãŸãããªã¹ãã`Dataset`ããŸã㯠`generator` ãæž¡ããšã) ã«æ©èœããŸãã
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
<Tip warning={true}>
ãã ããããã«ãã£ãŠããã©ãŒãã³ã¹ãèªåçã«åäžããããã§ã¯ãããŸãããããŒããŠã§ã¢ãããŒã¿ãå®éã«äœ¿çšãããã¢ãã«ã«å¿ããŠã10 åã®é«éåã«ãªãããšãããã°ã5 åã®äœéåã«ãªãããšããããŸãã

äž»ã«é«éåã«ãªãäŸ:
</Tip>
```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm
pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
return "This is a test"
dataset = MyDataset()
for batch_size in [1, 8, 64, 256]:
print("-" * 30)
print(f"Streaming batch_size={batch_size}")
for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
pass
```
```
# On GTX 970
------------------------------
Streaming no batching
100%|ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```
æãé床ãäœäžããäŸ:
```python
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
if i % 64 == 0:
n = 100
else:
n = 1
return "This is a test" * n
```
ããã¯ãä»ã®æã«æ¯ã¹ãŠéåžžã«é·ãæãæã
çŸããäŸã§ãããã®å Žåããããã**å
šäœ**ã 400 ããŒã¯ã³ã®é·ãã«ãªãå¿
èŠãããããããããå
šäœã [64, 4] ã§ã¯ãªã [64, 400] ã«ãªããé床ã倧å¹
ã«äœäžããŸããããã«æªãããšã«ããããã倧ãããªããšãããã°ã©ã ã¯åçŽã«ã¯ã©ãã·ã¥ããŸãã
```
------------------------------
Streaming no batching
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
ãã®åé¡ã«å¯Ÿããé©åãªïŒäžè¬çãªïŒè§£æ±ºçã¯ãªããæå¹ãªæ¹æ³ã¯ãŠãŒã¹ã±ãŒã¹ã«ãã£ãŠç°ãªããŸãããŠãŒã¶ãŒåãã®çµéšåã¯æ¬¡ã®ãšããã§ãã

- **å®éã®è² è·ãšå®éã®ããŒããŠã§ã¢ã§ããã©ãŒãã³ã¹ã枬å®ããŸããæž¬ã£ãŠã枬ã£ãŠã枬ãç¶ããŸããå®æž¬å€ãèŠãã®ãå¯äžã®é²ãæ¹ã§ãã**
- ã¬ã€ãã³ã·ã«å¶çŽãããå ŽåïŒå®éã®è£œåãæšè«ãå®è¡ããŠããå ŽåïŒã¯ããããåŠçãè¡ããªãã§ãã ããã
- CPU ã䜿çšããŠããå Žåã¯ããããåŠçãè¡ããªãã§ãã ããã
- GPU ã§ã¹ã«ãŒãããéèŠã®åŠçãè¡ãå ŽåïŒå€§éã®éçããŒã¿ã§ã¢ãã«ãå®è¡ããå ŽåïŒ:
  - sequence_lengthïŒãèªç¶ãªãããŒã¿ïŒã®ãµã€ãºã«ã€ããŠãŸã£ããããããªãå Žåã¯ãããã©ã«ãã§ã¯ãããåŠçãè¡ããã«æž¬å®ããæ«å®çã«è¿œå ããŠã¿ãŸãã倱æããå Žåã«å埩ã§ãããã OOM ãã§ãã¯ã远å ããŸãïŒsequence_length ãå¶åŸ¡ããªãéããé
ããæ©ãã倱æããŸãïŒã
  - sequence_length ãéåžžã«èŠåçã§ããã°ããããåŠçã¯éåžžã«æå¹ã§ããå¯èœæ§ãé«ãã§ããOOM ãçºçãããŸã§æž¬å®ããªãããµã€ãºãäžããŠã¿ãŠãã ããã
  - GPU ã倧ããã»ã©ããããåŠçãæå¹ã«ãªãå¯èœæ§ãé«ããªããŸãã
- ãããåŠçãæå¹ã«ããå Žåã¯ãOOM ãé©åã«åŠçã§ããããšã確èªããŠãã ããã
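ãã®ã倱æããã OOM ãã§ãã¯ã§å埩ããããšããé²ãæ¹ã¯ãäŸãã°æ¬¡ã®ãããªã¹ã±ããã§è¡šããŸããå®éã®ãã€ãã©ã€ã³ã§ã¯ CUDA OOM ã `RuntimeError` ãšããŠéç¥ããããšããä»®å®ã«åºã¥ãäŸã§ã`fake_pipeline` ã¯èª¬æçšã®ä»®ã®é¢æ°ã§ãã

```python
def run_with_adaptive_batch_size(process_batch, items, batch_size):
    """OOM (RuntimeError) ãåºããããããµã€ãºãååã«ããŠåè©Šè¡ããã"""
    while batch_size >= 1:
        try:
            results = []
            for i in range(0, len(items), batch_size):
                results.extend(process_batch(items[i : i + batch_size]))
            return results
        except RuntimeError:  # å®éã®ãã€ãã©ã€ã³ã§ã¯ CUDA out of memory ãªã©
            batch_size //= 2
    raise RuntimeError("batch_size=1 ã§ãã¡ã¢ãªãäžè¶³ããŸãã")


def fake_pipeline(batch):
    if len(batch) > 2:  # 3 件以äžã®ãããã§ OOM ãæš¡æ¬
        raise RuntimeError("out of memory")
    return [len(text) for text in batch]


print(run_with_adaptive_batch_size(fake_pipeline, ["a", "bb", "ccc", "dddd"], batch_size=8))
# → [1, 2, 3, 4]
```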
## Pipeline chunk batching
`zero-shot-classification` ãš `question-answering` ã¯ãåäžã®å
¥åããã¢ãã«ã®è€æ°ã®åæ¹ãã¹ãçºçããå¯èœæ§ããããšããæå³ã§å°ãç¹æ®ã§ããéåžžã®ç¶æ³ã§ã¯ãããã«ãã£ãŠ `batch_size` åŒæ°ã«é¢ããåé¡ãçºçããŸãã

ãã®åé¡ãåé¿ããããã«ããããã®ãã€ãã©ã€ã³ã¯ã©ã¡ããå°ãç¹æ®ã§ãéåžžã® `Pipeline` ã§ã¯ãªã `ChunkPipeline` ã«ãªã£ãŠããŸããèŠããã«ïŒ
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```
ä»ã¯æ¬¡ã®ããã«ãªããŸã:
```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
model_outputs = pipe.forward(preprocessed)
all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```
ãã€ãã©ã€ã³ã¯åãæ¹æ³ã§äœ¿çšãããããããã®éãã¯åŒã³åºãåŽã®ã³ãŒãããã¯éåžžã«ééçã§ããå¿
èŠããããŸãã

ãã€ãã©ã€ã³ã¯ãã£ã³ã¯ãã®ãã®ãèªåçã«åŠçã§ãããããããã¯ç°¡ç¥åããããã¥ãŒã§ããå®éã«å€ãã®åæ¹ãã¹ãããªã¬ãŒããå
¥åãšã¯ç¬ç«ã«ã`batch_size` ãæé©åã§ãããããæ°ã«ããå¿
èŠã¯ãããŸãããåã®ã»ã¯ã·ã§ã³ã®æ³šæäºé
ã¯åŒãç¶ãé©çšãããŸãã
## Pipeline custom code
ç¹å®ã®ãã€ãã©ã€ã³ããªãŒããŒã©ã€ãããå Žåãç®ã®åã®ã¿ã¹ã¯ã«ã€ã㊠Issue ãäœæããããšãèºèºããªãã§ãã ããããã€ãã©ã€ã³ã®ç®æšã¯ã䜿ãããããã»ãšãã©ã®ãŠãŒã¶ãŒããµããŒãããããšã§ãããããã£ãŠã`transformers` ãããªãã®ãŠãŒã¹ã±ãŒã¹ããµããŒãããå¯èœæ§ããããŸãã

åçŽã«è©ŠããŠã¿ããå Žåã¯ã次ã®ããšãã§ããŸãã

- éžæãããã€ãã©ã€ã³ããµãã¯ã©ã¹åããŸã
- éžæãããã€ãã©ã€ã³ããµãã¯ã©ã¹åããŸã
```python
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = super().postprocess(model_outputs, **kwargs)
        # And here
        return scores

my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
```
ããã«ãããå¿
èŠãªã«ã¹ã¿ã ã³ãŒãããã¹ãŠå®è¡ã§ããããã«ãªããŸãã
## Implementing a pipeline
[Implementing a new pipeline](../add_new_pipeline)
## Audio
ãªãŒãã£ãª ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### AudioClassificationPipeline
[[autodoc]] AudioClassificationPipeline
- __call__
- all
### AutomaticSpeechRecognitionPipeline
[[autodoc]] AutomaticSpeechRecognitionPipeline
- __call__
- all
### TextToAudioPipeline
[[autodoc]] TextToAudioPipeline
- __call__
- all
### ZeroShotAudioClassificationPipeline
[[autodoc]] ZeroShotAudioClassificationPipeline
- __call__
- all
## Computer vision
ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### DepthEstimationPipeline
[[autodoc]] DepthEstimationPipeline
- __call__
- all
### ImageClassificationPipeline
[[autodoc]] ImageClassificationPipeline
- __call__
- all
### ImageSegmentationPipeline
[[autodoc]] ImageSegmentationPipeline
- __call__
- all
### ImageToImagePipeline
[[autodoc]] ImageToImagePipeline
- __call__
- all
### ObjectDetectionPipeline
[[autodoc]] ObjectDetectionPipeline
- __call__
- all
### VideoClassificationPipeline
[[autodoc]] VideoClassificationPipeline
- __call__
- all
### ZeroShotImageClassificationPipeline
[[autodoc]] ZeroShotImageClassificationPipeline
- __call__
- all
### ZeroShotObjectDetectionPipeline
[[autodoc]] ZeroShotObjectDetectionPipeline
- __call__
- all
## Natural Language Processing
èªç¶èšèªåŠçã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### ConversationalPipeline
[[autodoc]] Conversation
[[autodoc]] ConversationalPipeline
- __call__
- all
### FillMaskPipeline
[[autodoc]] FillMaskPipeline
- __call__
- all
### NerPipeline
[[autodoc]] NerPipeline
詳现ã«ã€ããŠã¯ã[`TokenClassificationPipeline`] ãåç
§ããŠãã ããã
### QuestionAnsweringPipeline
[[autodoc]] QuestionAnsweringPipeline
- __call__
- all
### SummarizationPipeline
[[autodoc]] SummarizationPipeline
- __call__
- all
### TableQuestionAnsweringPipeline
[[autodoc]] TableQuestionAnsweringPipeline
- __call__
### TextClassificationPipeline
[[autodoc]] TextClassificationPipeline
- __call__
- all
### TextGenerationPipeline
[[autodoc]] TextGenerationPipeline
- __call__
- all
### Text2TextGenerationPipeline
[[autodoc]] Text2TextGenerationPipeline
- __call__
- all
### TokenClassificationPipeline
[[autodoc]] TokenClassificationPipeline
- __call__
- all
### TranslationPipeline
[[autodoc]] TranslationPipeline
- __call__
- all
### ZeroShotClassificationPipeline
[[autodoc]] ZeroShotClassificationPipeline
- __call__
- all
## Multimodal
ãã«ãã¢ãŒãã« ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### DocumentQuestionAnsweringPipeline
[[autodoc]] DocumentQuestionAnsweringPipeline
- __call__
- all
### FeatureExtractionPipeline
[[autodoc]] FeatureExtractionPipeline
- __call__
- all
### ImageToTextPipeline
[[autodoc]] ImageToTextPipeline
- __call__
- all
### VisualQuestionAnsweringPipeline
[[autodoc]] VisualQuestionAnsweringPipeline
- __call__
- all
## Parent class: `Pipeline`
[[autodoc]] Pipeline
<!-- File: docs/source/ja/main_classes/keras_callbacks.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Keras callbacks
Keras ã䜿çšã㊠Transformers ã¢ãã«ããã¬ãŒãã³ã°ããå Žåãäžè¬çãªã¿ã¹ã¯ãèªååããããã«äœ¿çšã§ããã©ã€ãã©ãªåºæã®ã³ãŒã«ããã¯ãããã€ããããŸãã
## KerasMetricCallback
[[autodoc]] KerasMetricCallback
## PushToHubCallback
[[autodoc]] PushToHubCallback
<!-- File: docs/source/ja/main_classes/output.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Model outputs
ãã¹ãŠã®ã¢ãã«ã«ã¯ã[`~utils.ModelOutput`] ã®ãµãã¯ã©ã¹ã®ã€ã³ã¹ã¿ã³ã¹ã§ããåºåããããŸãããããã¯ãã¢ãã«ã«ãã£ãŠè¿ããããã¹ãŠã®æ
å ±ãå«ãããŒã¿æ§é ã§ãããã¿ãã«ãèŸæžãšããŠã䜿çšã§ããŸãã

ãããã©ã®ããã«ãªãããäŸã§èŠãŠã¿ãŸãããã
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
`outputs` ãªããžã§ã¯ã㯠[`~modeling_outputs.SequenceClassifierOutput`] ã§ããããã¯ããªãã·ã§ã³ã® `loss`ã`logits`ããªãã·ã§ã³ã® `hidden_states`ããªãã·ã§ã³ã® `attentions` 屿§ãæã€ããšãæå³ããŸããããã§ã¯ã`labels` ãæž¡ããã®ã§ `loss` ããããŸããã`output_hidden_states=True` ã `output_attentions=True` ãæž¡ããŠããªãã®ã§ã`hidden_states` ãš `attentions` ã¯ãããŸããã
<Tip>
`output_hidden_states=True` ãæž¡ããšã`outputs.hidden_states[-1]` ã `outputs.last_hidden_state` ãšæ£ç¢ºã«äžèŽããããšãæåŸ
ããããããããŸãããããããåžžã«ãããªããšã¯éããŸãããã¢ãã«ã«ãã£ãŠã¯ãæåŸã®é ãç¶æ
ãè¿ãåã«ãæ£èŠåããã®åŸã®åŠçãé©çšãããã®ããããŸãã
</Tip>
éåžžãšåãããã«å屿§ã«ã¢ã¯ã»ã¹ã§ããŸãããã®å±æ§ãã¢ãã«ããè¿ãããªãã£ãå Žå㯠`None` ãååŸãããŸããããã§ã¯ãããšãã° `outputs.loss` ã¯ã¢ãã«ã«ãã£ãŠèšç®ãããæå€±ã§ããã`outputs.attentions` 㯠`None` ã§ãã

`outputs` ãªããžã§ã¯ããã¿ãã«ãšããŠèããå Žåã`None` å€ãæããªã屿§ã®ã¿ãèæ
®ãããŸããããšãã°ãããã«ã¯ `loss`ã次㫠`logits` ãšãã 2 ã€ã®èŠçŽ ããããŸãã
```python
outputs[:2]
```
ããã¯ã¿ãã« `(outputs.loss, outputs.logits)` ãè¿ããŸãã

`outputs` ãªããžã§ã¯ããèŸæžãšããŠèããå Žåãã`None` ã§ãªã屿§ã®ã¿ãèæ
®ãããŸããããšãã°ãããã«ã¯ `loss` ãš `logits` ãšãã 2 ã€ã®ããŒããããŸãã
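ãã® `None` ãã¹ãããããæåã¯ã説æã®ããã«æ¬¡ã®ãããªæå°ã®ã¯ã©ã¹ã§ã¹ã±ããã§ããŸãã`ToyOutput` ã¯èª¬æçšã®ä»®ã®ã¯ã©ã¹ã§ãããå®éã® [`~utils.ModelOutput`] ã®å®è£
ã§ã¯ãããŸããã

```python
from dataclasses import dataclass, fields
from typing import Any, Optional


@dataclass
class ToyOutput:
    """ModelOutput ã®ããã«ãNone ã®å±æ§ãã¿ãã«/ããŒè¡šçŸããé€å€ããç°¡æçã"""

    loss: Optional[float] = None
    logits: Optional[Any] = None
    hidden_states: Optional[Any] = None

    def to_tuple(self):
        return tuple(getattr(self, f.name) for f in fields(self) if getattr(self, f.name) is not None)

    def keys(self):
        return [f.name for f in fields(self) if getattr(self, f.name) is not None]


out = ToyOutput(loss=0.5, logits=[0.1, 0.9])
print(out.to_tuple())  # → (0.5, [0.1, 0.9]) â hidden_states 㯠None ãªã®ã§çŸããªã
print(out.keys())      # → ['loss', 'logits']
```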
ããã§ã¯ãè€æ°ã®ã¢ãã«ã¿ã€ãã§äœ¿çšãããæ±çšã¢ãã«ã®åºåãææžåããŸããå
·äœçãªåºåã¿ã€ãã¯ã察å¿ããã¢ãã«ã®ããŒãžã«èšèŒãããŠããŸãã
## ModelOutput
[[autodoc]] utils.ModelOutput
- to_tuple
## BaseModelOutput
[[autodoc]] modeling_outputs.BaseModelOutput
## BaseModelOutputWithPooling
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
## BaseModelOutputWithCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
## BaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
## BaseModelOutputWithPast
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
## BaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
## Seq2SeqModelOutput
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
## CausalLMOutput
[[autodoc]] modeling_outputs.CausalLMOutput
## CausalLMOutputWithCrossAttentions
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
## CausalLMOutputWithPast
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
## MaskedLMOutput
[[autodoc]] modeling_outputs.MaskedLMOutput
## Seq2SeqLMOutput
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
## NextSentencePredictorOutput
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
## SequenceClassifierOutput
[[autodoc]] modeling_outputs.SequenceClassifierOutput
## Seq2SeqSequenceClassifierOutput
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
## MultipleChoiceModelOutput
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
## TokenClassifierOutput
[[autodoc]] modeling_outputs.TokenClassifierOutput
## QuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
## Seq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
## Seq2SeqSpectrogramOutput
[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput
## SemanticSegmenterOutput
[[autodoc]] modeling_outputs.SemanticSegmenterOutput
## ImageClassifierOutput
[[autodoc]] modeling_outputs.ImageClassifierOutput
## ImageClassifierOutputWithNoAttention
[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention
## DepthEstimatorOutput
[[autodoc]] modeling_outputs.DepthEstimatorOutput
## Wav2Vec2BaseModelOutput
[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput
## XVectorOutput
[[autodoc]] modeling_outputs.XVectorOutput
## Seq2SeqTSModelOutput
[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput
## Seq2SeqTSPredictionOutput
[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput
## SampleTSPredictionOutput
[[autodoc]] modeling_outputs.SampleTSPredictionOutput
## TFBaseModelOutput
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
## TFBaseModelOutputWithPooling
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
## TFBaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
## TFBaseModelOutputWithPast
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
## TFBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
## TFSeq2SeqModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
## TFCausalLMOutput
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
## TFCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
## TFCausalLMOutputWithPast
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
## TFMaskedLMOutput
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
## TFSeq2SeqLMOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
## TFNextSentencePredictorOutput
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
## TFSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
## TFSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
## TFMultipleChoiceModelOutput
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
## TFTokenClassifierOutput
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
## TFQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
## TFSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
## FlaxBaseModelOutput
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
## FlaxBaseModelOutputWithPast
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
## FlaxBaseModelOutputWithPooling
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
## FlaxBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
## FlaxSeq2SeqModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
## FlaxCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
## FlaxMaskedLMOutput
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
## FlaxSeq2SeqLMOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
## FlaxNextSentencePredictorOutput
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
## FlaxSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
## FlaxSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
## FlaxMultipleChoiceModelOutput
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
## FlaxTokenClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
## FlaxQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
## FlaxSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
<!-- File: docs/source/ja/main_classes/processors.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Processors
Transformers ã©ã€ãã©ãªã«ãããŠããããã»ããµã«ã¯ 2 ã€ã®ç°ãªãæå³ããããŸãã

- [Wav2Vec2](../model_doc/wav2vec2)ïŒé³å£°ãšããã¹ãïŒã [CLIP](../model_doc/clip)ïŒããã¹ããšããžã§ã³ïŒãªã©ã®ãã«ãã¢ãŒãã«ã¢ãã«ã®å
¥åãååŠçãããªããžã§ã¯ã
- å€ãããŒãžã§ã³ã®ã©ã€ãã©ãªã§ GLUE ãŸã㯠SQuAD ã®ããŒã¿ãååŠçããããã«äœ¿çšãããŠãããªããžã§ã¯ãïŒçŸåšã¯éæšå¥šïŒ
## Multi-modal processors
ãã«ãã¢ãŒãã«ã¢ãã«ã§ã¯ãè€æ°ã®ã¢ããªãã£ïŒããã¹ããèŠèŠãé³å£°ïŒããªããžã§ã¯ãã«ãã£ãŠåŠçãããå¿
èŠããããŸããããã¯ãããŒã¯ãã€ã¶ãŒïŒããã¹ãã¢ããªãã£çšïŒãç»åããã»ããµãŒïŒèŠèŠçšïŒãç¹åŸŽæœåºåšïŒãªãŒãã£ãªçšïŒãªã©ã2 ã€ä»¥äžã®åŠçãªããžã§ã¯ããã°ã«ãŒãåããããã»ããµãŒãšåŒã°ãããªããžã§ã¯ããæ
åœããŸãã

ãããã®ããã»ããµã¯ãä¿åããã³ããŒãæ©èœãå®è£
ããæ¬¡ã®åºæ¬ã¯ã©ã¹ãç¶æ¿ããŸãã
[[autodoc]] ProcessorMixin
## Deprecated processors
ãã¹ãŠã®ããã»ããµã¯ã[`~data.processors.utils.DataProcessor`] ãšããåãã¢ãŒããã¯ãã£ã«åŸã£ãŠããŸããããã»ããµã¯ [`~data.processors.utils.InputExample`] ã®ãªã¹ããè¿ãããããã® [`~data.processors.utils.InputExample`] ã¯ãã¢ãã«ã«äŸçµŠããããã® [`~data.processors.utils.InputFeatures`] ã«å€æã§ããŸãã
[[autodoc]] data.processors.utils.DataProcessor
[[autodoc]] data.processors.utils.InputExample
[[autodoc]] data.processors.utils.InputFeatures
## GLUE
[äžè¬èšèªçè§£è©äŸ¡ (GLUE)](https://gluebenchmark.com/) ã¯ãæ¢åã®å€æ§ãª NLU ã¿ã¹ã¯ã«ãããã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ãããã³ãããŒã¯ã§ããè«æ [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) ãšåæã«ãªãªãŒã¹ãããŸããã

ãã®ã©ã€ãã©ãªã¯ãMRPCãMNLIãMNLIïŒäžäžèŽïŒãCoLAãSST-2ãSTS-BãQQPãQNLIãRTEãWNLI ã® 10 åã®ããã»ããµããã¹ãããŸãã
ãããã®ããã»ããµã¯æ¬¡ã®ãšããã§ãã
- [`~data.processors.utils.MrpcProcessor`]
- [`~data.processors.utils.MnliProcessor`]
- [`~data.processors.utils.MnliMismatchedProcessor`]
- [`~data.processors.utils.Sst2Processor`]
- [`~data.processors.utils.StsbProcessor`]
- [`~data.processors.utils.QqpProcessor`]
- [`~data.processors.utils.QnliProcessor`]
- [`~data.processors.utils.RteProcessor`]
- [`~data.processors.utils.WnliProcessor`]
ããã«ã次ã®ã¡ãœããã䜿çšããŠãããŒã¿ãã¡ã€ã«ããå€ãèªã¿èŸŒã¿ã[`~data.processors.utils.InputExample`] ã®ãªã¹ãã«å€æã§ããŸãã
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
## XNLI
[ã¯ãã¹ãªã³ã¬ã« NLI ã³ãŒãã¹ (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) ã¯ãèšèªãè¶
ããããã¹ã衚çŸã®å質ãè©äŸ¡ãããã³ãããŒã¯ã§ããXNLI ã¯ã[*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/) ã«åºã¥ãã¯ã©ãŠããœãŒã¹ã®ããŒã¿ã»ããã§ãããã¹ãã®ãã¢ã«ã¯ã15 ã®ããŸããŸãªèšèªïŒè±èªãªã©ã®é«ãªãœãŒã¹èšèªãšã¹ã¯ããªèªãªã©ã®äœãªãœãŒã¹èšèªã®äž¡æ¹ãå«ãïŒã«ã€ããŠãããã¹ã嫿ã®ã¢ãããŒã·ã§ã³ãã©ãã«ä»ããããŠããŸãã

è«æ [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) ãšåæã«ãªãªãŒã¹ãããŸããã
ãã®ã©ã€ãã©ãªã¯ãXNLI ããŒã¿ãããŒãããããã»ããµããã¹ãããŸãã
- [`~data.processors.utils.XnliProcessor`]
ãã¹ãã»ããã«ã¯ãŽãŒã«ãã©ãã«ãä»ããŠãããããè©äŸ¡ã¯ãã¹ãã»ããã§è¡ãããŸãã®ã§ãäºæ¿ãã ããã
ãããã®ããã»ããµã䜿çšããäŸã¯ã[run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) ã¹ã¯ãªããã«ç€ºãããŠããŸãã
## SQuAD
[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) ã¯ã次ã®ãã³ãããŒã¯ã§ãã
質åå¿çã«é¢ããã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããŸãã v1.1 ãš v2.0 ã® 2 ã€ã®ããŒãžã§ã³ãå©çšå¯èœã§ããæåã®ããŒãžã§ã³
(v1.1) ã¯ãè«æ [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250) ãšãšãã«ãªãªãŒã¹ãããŸããã 2 çªç®ã®ããŒãžã§ã³ (v2.0) ã¯ãè«æ [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822) ãšåæã«ãªãªãŒã¹ãããŸããã
ãã®ã©ã€ãã©ãªã¯ã次㮠2 ã€ã®ããŒãžã§ã³ã®ããããã®ããã»ããµããã¹ãããŸãã
### Processors
ãããã®ããã»ããµã¯æ¬¡ã®ãšããã§ãã
- [`~data.processors.utils.SquadV1Processor`]
- [`~data.processors.utils.SquadV2Processor`]
ã©ã¡ããæœè±¡ã¯ã©ã¹ [`~data.processors.utils.SquadProcessor`] ãç¶æ¿ããŠããŸãã
[[autodoc]] data.processors.squad.SquadProcessor
- all
ããã«ã次ã®ã¡ãœããã䜿çšããŠãSQuAD ã®äŸã次ã®åœ¢åŒã«å€æã§ããŸãã
ã¢ãã«ã®å
¥åãšããŠäœ¿çšã§ãã [`~data.processors.utils.SquadFeatures`]ã
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
ãããã®ããã»ããµãšåè¿°ã®æ¹æ³ã¯ãããŒã¿ãå«ããã¡ã€ã«ã ãã§ãªãã
*tensorflow_datasets* ããã±ãŒãžã以äžã«äŸã瀺ããŸãã
### Example usage
以äžã«ããã»ããµã䜿çšããäŸãšãããŒã¿ ãã¡ã€ã«ã䜿çšããå€ææ¹æ³ã瀺ããŸãã
```python
from transformers import SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features

# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)
# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
*tensorflow_datasets* ã®äœ¿çšã¯ãããŒã¿ ãã¡ã€ã«ã䜿çšããã®ãšåããããç°¡åã§ãã
```python
import tensorflow_datasets as tfds

from transformers import SquadV1Processor, squad_convert_examples_to_features

# tensorflow_datasets only handles Squad V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
ãããã®ããã»ããµã䜿çšããå¥ã®äŸã¯ã[run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) ã¹ã¯ãªããã«ç€ºãããŠããŸãã
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Trainer
[`Trainer`] ã¯ã©ã¹ã¯ãã»ãšãã©ã®æšæºçãªãŠãŒã¹ã±ãŒã¹ã«å¯ŸããŠãPyTorch ã§æ©èœãå®å
šã«ãã¬ãŒãã³ã°ããããã® API ãæäŸããŸããããã¯ã[ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples) ã®ã»ãšãã©ã§äœ¿çšãããŠããŸãã
[`Trainer`] ãã€ã³ã¹ã¿ã³ã¹åããåã«ããã¬ãŒãã³ã°äžã«ã«ã¹ã¿ãã€ãºã®ãã¹ãŠã®ãã€ã³ãã«ã¢ã¯ã»ã¹ããããã« [`TrainingArguments`] ãäœæããŸãã
ãã® API ã¯ãè€æ°ã® GPU/TPU ã§ã®åæ£ãã¬ãŒãã³ã°ã[NVIDIA Apex](https://github.com/NVIDIA/apex) ããã³ PyTorch ã®ãã€ãã£ã AMP ã«ããæ··å粟床ããµããŒãããŸãã
[`Trainer`] ã«ã¯ãäžèšã®æ©èœããµããŒãããåºæ¬çãªãã¬ãŒãã³ã° ã«ãŒããå«ãŸããŠããŸããã«ã¹ã¿ã åäœãæ¿å
¥ããã«ã¯ããããããµãã¯ã©ã¹åããæ¬¡ã®ã¡ãœããããªãŒããŒã©ã€ãããŸãã
- **get_train_dataloader** -- ãã¬ãŒãã³ã° ããŒã¿ããŒããŒãäœæããŸãã
- **get_eval_dataloader** -- è©äŸ¡çšããŒã¿ããŒããŒãäœæããŸãã
- **get_test_dataloader** -- ãã¹ã ããŒã¿ããŒããŒãäœæããŸãã
- **log** -- ãã¬ãŒãã³ã°ãç£èŠããŠããããŸããŸãªãªããžã§ã¯ãã«é¢ããæ
å ±ããã°ã«èšé²ããŸãã
- **create_optimizer_and_scheduler** -- ãªããã£ãã€ã¶ãšåŠç¿çã¹ã±ãžã¥ãŒã©ãæž¡ãããªãã£ãå Žåã«ã»ããã¢ããããŸãã
åæåã `create_optimizer`ã¡ãœãããš`create_scheduler`ã¡ãœããããµãã¯ã©ã¹åãŸãã¯ãªãŒããŒã©ã€ãããããšãã§ããããšã«æ³šæããŠãã ããã
å¥ã
ã«ã
- **create_optimizer** -- init ã§æž¡ãããªãã£ãå Žåã«ãªããã£ãã€ã¶ãŒãã»ããã¢ããããŸãã
- **create_scheduler** -- init ã§æž¡ãããªãã£ãå ŽåãåŠç¿çã¹ã±ãžã¥ãŒã©ãèšå®ããŸãã
- **compute_loss** - ãã¬ãŒãã³ã°å
¥åã®ãããã®æå€±ãèšç®ããŸãã
- **training_step** -- ãã¬ãŒãã³ã° ã¹ããããå®è¡ããŸãã
- **prediction_step** -- è©äŸ¡/ãã¹ã ã¹ããããå®è¡ããŸãã
- **evaluate** -- è©äŸ¡ã«ãŒããå®è¡ããã¡ããªã¯ã¹ãè¿ããŸãã
- **predict** -- ãã¹ã ã»ããã®äºæž¬ (ã©ãã«ã䜿çšå¯èœãªå Žåã¯ã¡ããªã¯ã¹ãå«ã) ãè¿ããŸãã
<Tip warning={true}>
[`Trainer`] ã¯ã©ã¹ã¯ ð€ Transformers ã¢ãã«çšã«æé©åãããŠãããé©ãã¹ãåäœãããå¯èœæ§ããããŸã
ä»ã®æ©çš®ã§äœ¿çšããå Žåãç¬èªã®ã¢ãã«ã§äœ¿çšããå Žåã¯ã次ã®ç¹ã確èªããŠãã ããã
- ã¢ãã«ã¯åžžã« [`~utils.ModelOutput`] ã®ã¿ãã«ãŸãã¯ãµãã¯ã©ã¹ãè¿ããŸãã
- `labels` åŒæ°ãæå®ããããã®æå€±ãæåã®å€ãšããŠè¿ãããå Žåãã¢ãã«ã¯æå€±ãèšç®ã§ããŸãã
ã¿ãã«ã®èŠçŽ (ã¢ãã«ãã¿ãã«ãè¿ãå Žå)
- ã¢ãã«ã¯è€æ°ã®ã©ãã«åŒæ°ãåãå
¥ããããšãã§ããŸã ([`TrainingArguments`] ã§ `label_names` ã䜿çšããŠããã®ååã [`Trainer`] ã«ç€ºããŸã) ãããããã®ãããã«ã `"label"` ãšããååãä»ããå¿
èŠã¯ãããŸããã
</Tip>
以äžã¯ãå éæå€±ã䜿çšããããã« [`Trainer`] ãã«ã¹ã¿ãã€ãºããæ¹æ³ã®äŸã§ã (äžåè¡¡ãªãã¬ãŒãã³ã° ã»ãããããå Žåã«åœ¹ç«ã¡ãŸã)ã
```python
import torch
from torch import nn
from transformers import Trainer
class CustomTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
labels = inputs.pop("labels")
# forward pass
outputs = model(**inputs)
logits = outputs.get("logits")
# compute custom loss (suppose one has 3 labels with different weights)
loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
return (loss, outputs) if return_outputs else loss
```
PyTorch [`Trainer`] ã®ãã¬ãŒãã³ã° ã«ãŒãã®åäœãã«ã¹ã¿ãã€ãºãããã 1 ã€ã®æ¹æ³ã¯ããã¬ãŒãã³ã° ã«ãŒãã®ç¶æ
ãæ€æ»ã§ãã [callbacks](ã³ãŒã«ããã¯) ã䜿çšããããšã§ã (é²è¡ç¶æ³ã¬ããŒããTensorBoard ãŸãã¯ä»ã® ML ãã©ãããã©ãŒã ã§ã®ãã°èšé²ãªã©)ãæ±ºå®ïŒæ©æåæ¢ãªã©ïŒã
## Trainer
[[autodoc]] Trainer
- all
## Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
## TrainingArguments
[[autodoc]] TrainingArguments
- all
## Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all
## Checkpoints
ããã©ã«ãã§ã¯ã[`Trainer`] ã¯ãã¹ãŠã®ãã§ãã¯ãã€ã³ããã
[`TrainingArguments`] ã䜿çšããŠããŸãããããã¯ãxxx ãå«ã`checkpoint-xxx`ãšããååã®ãµããã©ã«ããŒã«ä¿åãããŸãã
ããã¯ãã¬ãŒãã³ã°ã®æ®µéã§ããã
ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããã«ã¯ã次ã®ããããã䜿çšã㊠[`Trainer.train`] ãåŒã³åºããŸãã
- `resume_from_checkpoint=True` ã¯ææ°ã®ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããŸã
- `resume_from_checkpoint=checkpoint_dir` ãã£ã¬ã¯ããªå
ã®ç¹å®ã®ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããŸã
ããã«ã`push_to_hub=True` ã䜿çšãããšãã¢ãã« ããã«ãã§ãã¯ãã€ã³ããç°¡åã«ä¿åã§ããŸããããã©ã«ãã§ã¯ããã¹ãŠ
äžéãã§ãã¯ãã€ã³ãã«ä¿åãããã¢ãã«ã¯å¥ã®ã³ãããã«ä¿åãããŸããããªããã£ãã€ã¶ãŒã®ç¶æ
ã¯ä¿åãããŸãããé©å¿ã§ããŸã
[`TrainingArguments`] ã® `hub-strategy` å€ã次ã®ããããã«ããŸãã
- `"checkpoint"`: ææ°ã®ãã§ãã¯ãã€ã³ãã last-checkpoint ãšããååã®ãµããã©ã«ããŒã«ããã·ã¥ãããŸãã
`trainer.train(resume_from_checkpoint="output_dir/last-checkpoint")` ã䜿çšããŠãã¬ãŒãã³ã°ãç°¡åã«åéããŸãã
- `"all_checkpoints"`: ãã¹ãŠã®ãã§ãã¯ãã€ã³ãã¯ãåºåãã©ã«ããŒã«è¡šç€ºãããããã«ããã·ã¥ãããŸã (ãããã£ãŠã1 ã€ã®ãã§ãã¯ãã€ã³ããåŸãããŸã)
æçµãªããžããªå
ã®ãã©ã«ããŒããšã®ãã§ãã¯ãã€ã³ã ãã©ã«ããŒ)
## Logging
ããã©ã«ãã§ã¯ã[`Trainer`] ã¯ã¡ã€ã³ããã»ã¹ã« `logging.INFO` ã䜿çšããã¬ããªã«ãããå Žåã«ã¯ `logging.WARNING` ã䜿çšããŸãã
ãããã®ããã©ã«ãã¯ã[`TrainingArguments`] ã® 5 ã€ã® `logging` ã¬ãã«ã®ããããã䜿çšããããã«ãªãŒããŒã©ã€ãã§ããŸãã
åŒæ°:
- `log_level` - ã¡ã€ã³ããã»ã¹çš
- `log_level_replica` - ã¬ããªã«çš
ããã«ã[`TrainingArguments`] ã® `log_on_each_node` ã `False` ã«èšå®ãããŠããå Žåãã¡ã€ã³ ããŒãã®ã¿ã
ã¡ã€ã³ ããã»ã¹ã®ãã° ã¬ãã«èšå®ã䜿çšãããšãä»ã®ãã¹ãŠã®ããŒãã¯ã¬ããªã«ã®ãã° ã¬ãã«èšå®ã䜿çšããŸãã
[`Trainer`] ã¯ã`transformers` ã®ãã° ã¬ãã«ãããŒãããšã«åå¥ã«èšå®ããããšã«æ³šæããŠãã ããã
[`Trainer.__init__`]ããããã£ãŠãä»ã®æ©èœãå©çšããå Žåã¯ããããããæ©ãèšå®ããããšããå§ãããŸã (次ã®äŸãåç
§)ã
[`Trainer`] ãªããžã§ã¯ããäœæããåã® `transformers` æ©èœã
ãããã¢ããªã±ãŒã·ã§ã³ã§äœ¿çšããæ¹æ³ã®äŸã次ã«ç€ºããŸãã
```python
[...]
logger = logging.getLogger(__name__)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
# set the main code and the modules it uses to the same log-level according to the node
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
trainer = Trainer(...)
```
ãããŠãã¡ã€ã³ ããŒããšä»ã®ãã¹ãŠã®ããŒãã§éè€ããå¯èœæ§ãé«ããã®ãåºåããªãããã«èŠåããã ãã衚瀺ãããå Žåã¯ã
èŠå: 次ã®ããã«å®è¡ã§ããŸãã
```bash
my_app.py ... --log_level warning --log_level_replica error
```
ãã«ãããŒãç°å¢ã§ãåããŒãã®ã¡ã€ã³ããã»ã¹ã®ãã°ãç¹°ãè¿ããããªãå Žåã¯ã次ã®ããã«ããŸãã
äžèšã次ã®ããã«å€æŽããŸãã
```bash
my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
```
ãã®åŸãæåã®ããŒãã®ã¡ã€ã³ ããã»ã¹ã®ã¿ããèŠåãã¬ãã«ã§ãã°ã«èšé²ãããã¡ã€ã³ ããŒãäžã®ä»ã®ãã¹ãŠã®ããã»ã¹ã¯ãã°ã«èšé²ãããŸãã
ããŒããšä»ã®ããŒãäžã®ãã¹ãŠã®ããã»ã¹ã¯ããšã©ãŒãã¬ãã«ã§ãã°ã«èšé²ãããŸãã
ã¢ããªã±ãŒã·ã§ã³ãã§ããã ãéãã«ããå¿
èŠãããå Žåã¯ã次ã®ããã«ããŸãã
```bash
my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0
```
(ãã«ãããŒãç°å¢ã®å Žå㯠`--log_on_each_node 0` ã远å ããŸã)
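The selection rule above can be modeled in a few lines of plain Python (an illustrative simplification of `TrainingArguments.get_process_log_level`; the function name and arguments here are hypothetical):

```python
import logging

_LEVELS = {"debug": logging.DEBUG, "info": logging.INFO,
           "warning": logging.WARNING, "error": logging.ERROR,
           "critical": logging.CRITICAL}

def process_log_level(process_index, node_index, log_level="warning",
                      log_level_replica="error", log_on_each_node=True):
    # A process gets the "main" log level if it is process 0 on its node;
    # with log_on_each_node=False, only node 0's main process qualifies.
    is_main = process_index == 0 and (log_on_each_node or node_index == 0)
    return _LEVELS[log_level if is_main else log_level_replica]
```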
## Randomness
[`Trainer`] ã«ãã£ãŠçæããããã§ãã¯ãã€ã³ãããåéããå Žåããã¹ãŠã®åªåããã®ç¶æ
ã埩å
ããããã«è¡ãããŸãã
_python_ã_numpy_ãããã³ _pytorch_ ã® RNG ç¶æ
ã¯ããã®ãã§ãã¯ãã€ã³ããä¿åããæç¹ãšåãç¶æ
ã«ãªããŸãã
ããã«ãããã忢ããŠåéããšããã¹ã¿ã€ã«ã®ãã¬ãŒãã³ã°ãããã³ã¹ããããã¬ãŒãã³ã°ã«å¯èœãªéãè¿ã¥ããããã¯ãã§ãã
ãã ããããŸããŸãªããã©ã«ãã®é決å®ç㪠pytorch èšå®ã«ãããããã¯å®å
šã«æ©èœããªãå¯èœæ§ããããŸãããã«ããåžæã®å Žåã¯
決å®è«ã«ã€ããŠã¯ã[ã©ã³ãã æ§ã®ãœãŒã¹ã®å¶åŸ¡](https://pytorch.org/docs/stable/notes/randomness) ãåç
§ããŠãã ãããããã¥ã¡ã³ãã§èª¬æãããŠããããã«ããããã®èšå®ã®äžéšã¯
ç©äºã決å®è«çã«ãããã® (äŸ: `torch.backends.cudnn.deterministic`) ã¯ç©äºãé
ãããå¯èœæ§ããããããããã¯
ããã©ã«ãã§ã¯å®è¡ã§ããŸããããå¿
èŠã«å¿ããŠèªåã§æå¹ã«ããããšãã§ããŸãã
## Specific GPUs Selection
ã©ã® GPU ãã©ã®ãããªé åºã§äœ¿çšããããããã°ã©ã ã«æç€ºããæ¹æ³ã«ã€ããŠèª¬æããŸãã
[`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) ã䜿çšã㊠GPU ã®ãµãã»ããã®ã¿ã䜿çšããå Žåã䜿çšãã GPU ã®æ°ãæå®ããã ãã§ãã ãããšãã°ãGPU ã 4 ã€ããããæåã® 2 ã€ã䜿çšãããå Žåã¯ã次ã®ããã«ããŸãã
```bash
torchrun --nproc_per_node=2 trainer-program.py ...
```
[`accelerate`](https://github.com/huggingface/accelerate) ãŸã㯠[`deepspeed`](https://github.com/microsoft/DeepSpeed) ãã€ã³ã¹ããŒã«ãããŠããå Žåã¯ã次ã䜿çšããŠåãããšãéæããããšãã§ããŸããã®äžã€ïŒ
```bash
accelerate launch --num_processes 2 trainer-program.py ...
```
```bash
deepspeed --num_gpus 2 trainer-program.py ...
```
ãããã®ã©ã³ãã£ãŒã䜿çšããããã«ãAccelerate ãŸã㯠[Deepspeed çµ±å](deepspeed) æ©èœã䜿çšããå¿
èŠã¯ãããŸããã
ãããŸã§ã¯ãããã°ã©ã ã«äœ¿çšãã GPU ã®æ°ãæç€ºã§ããŸãããæ¬¡ã«ãç¹å®ã® GPU ãéžæãããã®é åºãå¶åŸ¡ããæ¹æ³ã«ã€ããŠèª¬æããŸãã
次ã®ç°å¢å€æ°ã¯ã䜿çšãã GPU ãšãã®é åºãå¶åŸ¡ããã®ã«åœ¹ç«ã¡ãŸãã
**`CUDA_VISIBLE_DEVICES`**
è€æ°ã® GPU ãããããã®ãã¡ã® 1 ã€ãŸãã¯ããã€ãã® GPU ã ãã䜿çšãããå Žåã¯ãç°å¢å€æ° `CUDA_VISIBLE_DEVICES` ã䜿çšãã GPU ã®ãªã¹ãã«èšå®ããŸãã
ããšãã°ã4 ã€ã® GPU (0ã1ã2ã3) ããããšããŸããç©ç GPU 0 ãš 2 ã®ã¿ã§å®è¡ããã«ã¯ã次ã®ããã«ããŸãã
```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
```
ãããã£ãŠãpytorch 㯠2 ã€ã® GPU ã®ã¿ãèªèããç©ç GPU 0 ãš 2 ã¯ãããã `cuda:0` ãš `cuda:1` ã«ãããã³ã°ãããŸãã
é åºã倿Žããããšãã§ããŸãã
```bash
CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ...
```
ããã§ã¯ãç©ç GPU 0 ãš 2 ããããã`cuda:1`ãš`cuda:0`ã«ãããã³ã°ãããŠããŸãã
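The remapping can be sketched in pure Python (illustrative only — the real mapping happens inside the CUDA runtime): entry *i* of `CUDA_VISIBLE_DEVICES` becomes logical device `cuda:i`.

```python
import os

def visible_device_map(env=None):
    """Sketch of CUDA_VISIBLE_DEVICES semantics: unset means every physical
    GPU is visible in default order; an empty value hides all GPUs;
    otherwise entry i of the list becomes logical device cuda:i."""
    if env is None:
        env = os.environ
    raw = env.get("CUDA_VISIBLE_DEVICES")
    if raw is None:
        return None  # all physical devices, default order
    ids = [int(x) for x in raw.split(",") if x.strip()]
    return {"cuda:%d" % i: "physical GPU %d" % p for i, p in enumerate(ids)}
```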
äžèšã®äŸã¯ãã¹ãŠ `DistributedDataParallel` 䜿çšãã¿ãŒã³ã®ãã®ã§ãããåãæ¹æ³ã [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) ã§ãæ©èœããŸãã
```bash
CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py ...
```
GPU ã®ãªãç°å¢ããšãã¥ã¬ãŒãããã«ã¯ã次ã®ããã«ãã®ç°å¢å€æ°ã空ã®å€ã«èšå®ããã ãã§ãã
```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
ä»ã®ç°å¢å€æ°ãšåæ§ã«ãããããã³ãã³ã ã©ã€ã³ã«è¿œå ãã代ããã«ã次ã®ããã«ãšã¯ã¹ããŒãããããšãã§ããŸãã
```bash
export CUDA_VISIBLE_DEVICES=0,2
torchrun trainer-program.py ...
```
ãã ãããã®æ¹æ³ã§ã¯ã以åã«ç°å¢å€æ°ãèšå®ããããšãå¿ããŠããªãééã£ã GPU ã䜿çšãããŠããã®ãçè§£ã§ããªãå¯èœæ§ããããããæ··ä¹±ãæãå¯èœæ§ããããŸãããããã£ãŠããã®ã»ã¯ã·ã§ã³ã®ã»ãšãã©ã®äŸã§ç€ºãããŠããããã«ãåãã³ãã³ã ã©ã€ã³ã§ç¹å®ã®å®è¡ã«å¯ŸããŠã®ã¿ç°å¢å€æ°ãèšå®ããã®ãäžè¬çã§ãã
**`CUDA_DEVICE_ORDER`**
ç©çããã€ã¹ã®é åºãå¶åŸ¡ãã远å ã®ç°å¢å€æ° `CUDA_DEVICE_ORDER` ããããŸããéžæè¢ã¯æ¬¡ã® 2 ã€ã§ãã
1. PCIe ãã¹ ID é (`nvidia-smi` ã®é åºãšäžèŽ) - ãããããã©ã«ãã§ãã
```bash
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
2. GPU ã³ã³ãã¥ãŒãã£ã³ã°èœåé ã«äžŠã¹ã
```bash
export CUDA_DEVICE_ORDER=FASTEST_FIRST
```
ã»ãšãã©ã®å Žåããã®ç°å¢å€æ°ãæ°ã«ããå¿
èŠã¯ãããŸããããå€ã GPU ãšæ°ãã GPU ãç©ççã«æ¿å
¥ãããŠãããããé
ãå€ãã«ãŒããé
ããªã£ãŠããããã«èŠãããããªåã£ãã»ããã¢ãããè¡ã£ãŠããå Žåã«ã¯ãéåžžã«åœ¹ç«ã¡ãŸããåããããã解決ãã 1 ã€ã®æ¹æ³ã¯ãã«ãŒãã亀æããããšã§ãããã ããã«ãŒãã亀æã§ããªãå Žå (ããã€ã¹ã®å·åŽã圱é¿ãåããå Žåãªã©)ã`CUDA_DEVICE_ORDER=FASTEST_FIRST`ãèšå®ãããšãåžžã«æ°ããé«éã«ãŒããæåã«é
眮ãããŸãããã ãã`nvidia-smi`ã¯äŸç¶ãšã㊠PCIe ã®é åºã§ã¬ããŒããããããå€å°æ··ä¹±ããã§ãããã
é åºãå
¥ãæ¿ãããã 1 ã€ã®è§£æ±ºçã¯ã以äžã䜿çšããããšã§ãã
```bash
export CUDA_VISIBLE_DEVICES=1,0
```
ãã®äŸã§ã¯ 2 ã€ã® GPU ã ãã䜿çšããŠããŸããããã¡ãããã³ã³ãã¥ãŒã¿ãŒã«æèŒãããŠããæ°ã® GPU ã«ãåãããšãåœãŠã¯ãŸããŸãã
ãŸãããã®ç°å¢å€æ°ãèšå®ããå Žåã¯ã`~/.bashrc` ãã¡ã€ã«ãŸãã¯ãã®ä»ã®èµ·åèšå®ãã¡ã€ã«ã«èšå®ããŠãå¿ããã®ãæåã§ãã
## Trainer Integrations
[`Trainer`] ã¯ããã¬ãŒãã³ã°ãåçã«æ¹åããå¯èœæ§ã®ããã©ã€ãã©ãªããµããŒãããããã«æ¡åŒµãããŸããã
æéãšã¯ããã«å€§ããªã¢ãã«ã«é©åããŸãã
çŸåšããµãŒãããŒãã£ã®ãœãªã¥ãŒã·ã§ã³ [DeepSpeed](https://github.com/microsoft/DeepSpeed) ããã³ [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html) ããµããŒãããŠããŸããè«æ [ZeRO: ã¡ã¢ãªã®æé©å]
å
ãã©ã¡ãŒã¿ ã¢ãã«ã®ãã¬ãŒãã³ã°ã«åããŠãSamyam RajbhandariãJeff RasleyãOlatunji RuwaseãYuxiong He è](https://arxiv.org/abs/1910.02054)ã
ãã®æäŸããããµããŒãã¯ããã®èšäºã®å·çæç¹ã§ã¯æ°ãããŠå®éšçãªãã®ã§ãã DeepSpeed ãš PyTorch FSDP ã®ãµããŒãã¯ã¢ã¯ãã£ãã§ãããããã«é¢ããåé¡ã¯æè¿ããŸãããFairScale çµ±å㯠PyTorch ã¡ã€ã³ã«çµ±åãããŠããããããããµããŒãããŠããŸãã ([PyTorch FSDP çµ±å](#pytorch-fully-sharded-data-parallel))
<a id='zero-install-notes'></a>
### CUDA Extension Installation Notes
ãã®èšäºã®å·çæç¹ã§ã¯ãDeepspeed ã䜿çšããã«ã¯ãCUDA C++ ã³ãŒããã³ã³ãã€ã«ããå¿
èŠããããŸãã
ãã¹ãŠã®ã€ã³ã¹ããŒã«ã®åé¡ã¯ã[Deepspeed](https://github.com/microsoft/DeepSpeed/issues) ã®å¯Ÿå¿ãã GitHub ã®åé¡ãéããŠå¯ŸåŠããå¿
èŠããããŸããããã«ãäžã«çºçããå¯èœæ§ã®ããäžè¬çãªåé¡ãããã€ããããŸãã
CUDA æ¡åŒµæ©èœãæ§ç¯ããå¿
èŠããã PyTorch æ¡åŒµæ©èœã
ãããã£ãŠãæ¬¡ã®æäœãå®è¡äžã« CUDA é¢é£ã®ãã«ãã®åé¡ãçºçããå Žåã¯ã次ã®ãšããã§ãã
```bash
pip install deepspeed
```
ãŸãæ¬¡ã®æ³šæäºé
ããèªã¿ãã ããã
ãããã®ããŒãã§ã¯ã`pytorch` ã CUDA `10.2` ã§ãã«ããããå Žåã«äœããã¹ããã®äŸã瀺ããŸããããªãã®ç¶æ³ã次ã®ãããªå Žå
ç°ãªãå Žåã¯ãããŒãžã§ã³çªå·ãç®çã®ããŒãžã§ã³ã«èª¿æŽããããšãå¿ããªãã§ãã ããã
#### Possible problem #1
Pytorch ã«ã¯ç¬èªã® CUDA ããŒã«ããããä»å±ããŠããŸãããããã 2 ã€ã®ãããžã§ã¯ãããã«ãããã«ã¯ãåäžããŒãžã§ã³ã® CUDA ãå¿
èŠã§ãã
ã·ã¹ãã å
šäœã«ã€ã³ã¹ããŒã«ãããŸãã
ããšãã°ãPython ç°å¢ã« `cudatoolkit==10.2` ãæå®ã㊠`pytorch` ãã€ã³ã¹ããŒã«ããå Žåã¯ã次ã®ãã®ãå¿
èŠã§ãã
CUDA `10.2` ãã·ã¹ãã å
šäœã«ã€ã³ã¹ããŒã«ãããŸããã
æ£ç¢ºãªå Žæã¯ã·ã¹ãã ã«ãã£ãŠç°ãªãå ŽåããããŸãããå€ãã®ã·ã¹ãã ã§ã¯`/usr/local/cuda-10.2`ãæãäžè¬çãªå Žæã§ãã
Unix ã·ã¹ãã ã CUDA ãæ£ããèšå®ããã`PATH`ç°å¢å€æ°ã«è¿œå ããããšã
次ã®ããã«ããŠã€ã³ã¹ããŒã«å Žæãæå®ããŸãã
```bash
which nvcc
```
CUDA ãã·ã¹ãã å
šäœã«ã€ã³ã¹ããŒã«ãããŠããªãå Žåã¯ãæåã«ã€ã³ã¹ããŒã«ããŠãã ããããæ°ã«å
¥ãã䜿çšããŠæé ãèŠã€ããããšãã§ããŸã
æ€çŽ¢ãšã³ãžã³ãããšãã°ãUbuntu ã䜿çšããŠããå Žåã¯ã[ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install) ãæ€çŽ¢ãããšããã§ãããã
#### Possible problem #2
ãã 1 ã€ã®èããããäžè¬çãªåé¡ã¯ãã·ã¹ãã å
šäœã«è€æ°ã® CUDA ããŒã«ããããã€ã³ã¹ããŒã«ãããŠããå¯èœæ§ãããããšã§ããããšãã°ããªã
ãããå¯èœæ§ãããïŒ
```bash
/usr/local/cuda-10.2
/usr/local/cuda-11.0
```
ãã®ç¶æ³ã§ã¯ã`PATH` ããã³ `LD_LIBRARY_PATH` ç°å¢å€æ°ã«ä»¥äžãå«ãŸããŠããããšã確èªããå¿
èŠããããŸãã
ç®çã® CUDA ããŒãžã§ã³ãžã®æ£ãããã¹ãéåžžãããã±ãŒãž ã€ã³ã¹ããŒã©ãŒã¯ããããã«ã
æåŸã®ããŒãžã§ã³ãã€ã³ã¹ããŒã«ãããŸãããé©åãªããã±ãŒãžãèŠã€ãããªãããã«ããã±ãŒãžã®ãã«ãã倱æãããšããåé¡ãçºçããå Žåã¯ã
CUDA ããŒãžã§ã³ãã·ã¹ãã å
šäœã«ã€ã³ã¹ããŒã«ãããŠããã«ãããããããåè¿°ã® 2 ã€ã調æŽããå¿
èŠãããããšãæå³ããŸã
ç°å¢å€æ°ã
ãŸãããã®å
容ãèŠãŠã¿ãŸãããã
```bash
echo $PATH
echo $LD_LIBRARY_PATH
```
ããã§ãäžã«äœãå
¥ã£ãŠããããããããŸãã
`LD_LIBRARY_PATH` ã空ã§ããå¯èœæ§ããããŸãã
`PATH` ã¯å®è¡å¯èœãã¡ã€ã«ãååšããå Žæããªã¹ããã`LD_LIBRARY_PATH` ã¯å
±æã©ã€ãã©ãªã®å Žæã瀺ããŸãã
æ¢ãããšã§ããã©ã¡ãã®å Žåããåã®ãšã³ããªãåŸã®ãšã³ããªããåªå
ãããŸãã `:` ã¯è€æ°ãåºåãããã«äœ¿çšãããŸã
ãšã³ããªã
ããã§ããã«ã ããã°ã©ã ã«ç¹å®ã® CUDA ããŒã«ãããã®å Žæãæç€ºããã«ã¯ãæåã«ãªã¹ããããåžæã®ãã¹ãæ¿å
¥ããŸãã
ãã£ãŠããããšïŒ
```bash
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
```
æ¢åã®å€ãäžæžãããã®ã§ã¯ãªããå
é ã«è¿œå ããããšã«æ³šæããŠãã ããã
ãã¡ãããå¿
èŠã«å¿ããŠããŒãžã§ã³çªå·ããã«ãã¹ã調æŽããŸããå²ãåœãŠããã£ã¬ã¯ããªãå®éã«æ©èœããããšã確èªããŠãã ãã
ååšããã `lib64` ãµããã£ã¬ã¯ããªã¯ã`libcudart.so` ãªã©ã®ããŸããŸãª CUDA `.so` ãªããžã§ã¯ããååšããå Žæã§ãã
ã·ã¹ãã ã§ã¯å¥ã®ååãä»ããããŸãããçŸå®ãåæ ããããã«èª¿æŽããŠãã ããã
#### Possible problem #3
äžéšã®å€ã CUDA ããŒãžã§ã³ã¯ãæ°ããã³ã³ãã€ã©ã§ã®ãã«ããæåŠããå ŽåããããŸããããšãã°ãããªãã¯`gcc-9`ãæã£ãŠããŸããããããå¿
èŠã§ã
`gcc-7`ã
ããã«ã¯ããŸããŸãªæ¹æ³ããããŸãã
ææ°ã® CUDA ããŒã«ããããã€ã³ã¹ããŒã«ã§ããå Žåã¯ãéåžžãæ°ããã³ã³ãã€ã©ããµããŒããããŠããã¯ãã§ãã
ãããã¯ãæ¢ã«ææããŠããã³ã³ãã€ã©ã«å ããŠãäžäœããŒãžã§ã³ã®ã³ã³ãã€ã©ãã€ã³ã¹ããŒã«ããããšãã§ããŸãã
ãã§ã«ååšããŸãããããã©ã«ãã§ã¯ãªãããããã«ãã·ã¹ãã ã¯ãããèªèã§ããŸããã ãgcc-7ããã€ã³ã¹ããŒã«ãããŠãããã
ãã«ãã·ã¹ãã ãèŠã€ãããªããšããã¡ãã»ãŒãžã衚瀺ããå Žåã¯ãæ¬¡ã®æ¹æ³ã§è§£æ±ºã§ããå¯èœæ§ããããŸãã
```bash
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
```
ããã§ã¯ã`/usr/local/cuda-10.2/bin/gcc` ãã `gcc-7` ãžã®ã·ã³ããªãã¯ãªã³ã¯ãäœæããŠããŸãã
`/usr/local/cuda-10.2/bin/` 㯠`PATH` ç°å¢å€æ°å
ã«ããå¿
èŠããããŸã (åã®åé¡ã®è§£æ±ºçãåç
§)ã
`gcc-7` (ããã³ `g++7`) ãèŠã€ããã¯ãã§ããã«ãã¯æåããŸãã
ãã€ãã®ããã«ãç¶æ³ã«åãããŠäŸã®ãã¹ãç·šéããŠãã ããã
### PyTorch Fully Sharded Data parallel
ãã倧ããªããã ãµã€ãºã§å·šå€§ãªã¢ãã«ã®ãã¬ãŒãã³ã°ãé«éåããã«ã¯ãå®å
šã«ã·ã£ãŒãåãããããŒã¿äžŠåã¢ãã«ã䜿çšã§ããŸãã
ãã®ã¿ã€ãã®ããŒã¿äžŠåãã©ãã€ã ã§ã¯ããªããã£ãã€ã¶ãŒã®ç¶æ
ãåŸé
ããã©ã¡ãŒã¿ãŒãã·ã£ãŒãã£ã³ã°ããããšã§ãããå€ãã®ããŒã¿ãšå€§èŠæš¡ãªã¢ãã«ããã£ããã£ã³ã°ã§ããŸãã
ãã®æ©èœãšãã®å©ç¹ã®è©³çްã«ã€ããŠã¯ã[å®å
šã·ã£ãŒãã£ã³ã° ããŒã¿äžŠåããã°](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/) ãã芧ãã ããã
ææ°ã® PyTorch ã® Fully Sharded Data Parallel (FSDP) ãã¬ãŒãã³ã°æ©èœãçµ±åããŸããã
å¿
èŠãªã®ã¯ãèšå®ãéããŠæå¹ã«ããããšã ãã§ãã
**FSDP ãµããŒãã«å¿
èŠãª PyTorch ããŒãžã§ã³**: PyTorch Nightly (ãªãªãŒã¹åŸã«ãããèªãã å Žå㯠1.12.0)
FSDP ãæå¹ã«ããã¢ãã«ã®ä¿åã¯ãæè¿ã®ä¿®æ£ã§ã®ã¿å©çšã§ããããã§ãã
**äœ¿çšæ³**ïŒ
- é
åžãããã©ã³ãã£ãŒã远å ãããŠããããšã確èªããŠãã ãã
ãŸã 䜿çšããŠããªãå Žåã¯ã`-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE`ã䜿çšããŸãã
- **ã·ã£ãŒãã£ã³ã°æŠç¥**:
- FULL_SHARD : ããŒã¿äžŠåã¯ãŒã«ãŒ/GPU ã«ãããã·ã£ãŒã ãªããã£ãã€ã¶ãŒã®ç¶æ
+ åŸé
+ ã¢ãã« ãã©ã¡ãŒã¿ãŒã
ãã®ããã«ã¯ãã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp full_shard`ã远å ããŸãã
- SHARD_GRAD_OP : ã·ã£ãŒã ãªããã£ãã€ã¶ãŒã®ç¶æ
+ ããŒã¿äžŠåã¯ãŒã«ãŒ/GPU å
šäœã®åŸé
ã
ãã®ããã«ã¯ãã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp shard_grad_op`ã远å ããŸãã
- NO_SHARD : ã·ã£ãŒãã£ã³ã°ãªãããã®ããã«ã¯ãã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp no_shard`ã远å ããŸãã
- ãã©ã¡ãŒã¿ãšåŸé
ã CPU ã«ãªãããŒãããã«ã¯ã
ã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp "full_shard offload"`ãŸãã¯`--fsdp "shard_grad_op offload"`ã远å ããŸãã
- `default_auto_wrap_policy` ã䜿çšã㊠FSDP ã§ã¬ã€ã€ãŒãèªåçã«ååž°çã«ã©ããããã«ã¯ã
ã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp "full_shard auto_wrap"`ãŸãã¯`--fsdp "shard_grad_op auto_wrap"`ã远å ããŸãã
- CPU ãªãããŒããšèªåã©ããã³ã°ã®äž¡æ¹ãæå¹ã«ããã«ã¯ã
ã³ãã³ãã©ã€ã³åŒæ°ã«`--fsdp "full_shard offload auto_wrap"`ãŸãã¯`--fsdp "shard_grad_op offload auto_wrap"`ã远å ããŸãã
- æ®ãã® FSDP æ§æã¯ã`--fsdp_config <path_to_fsdp_config.json>`ãä»ããŠæž¡ãããŸããããã¯ã次ã®ããããã®å Žæã§ãã
FSDP json æ§æãã¡ã€ã« (äŸ: `fsdp_config.json`)ããŸãã¯ãã§ã«ããŒããããŠãã json ãã¡ã€ã«ã `dict` ãšããŠäœ¿çšããŸãã
- èªåã©ããã³ã°ãæå¹ãªå Žåã¯ããã©ã³ã¹ããŒã¹ã®èªåã©ãã ããªã·ãŒãŸãã¯ãµã€ãº ããŒã¹ã®èªåã©ãã ããªã·ãŒã䜿çšã§ããŸãã
- ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåãæ§æãã¡ã€ã«ã§ `fsdp_transformer_layer_cls_to_wrap` ãæå®ããããšããå§ãããŸããæå®ããªãå Žåã䜿çšå¯èœãªå Žåãããã©ã«ãå€ã¯ `model._no_split_modules` ã«ãªããŸãã
ããã¯ãã©ãããããã©ã³ã¹ãã©ãŒããŒå±€ã¯ã©ã¹åã®ãªã¹ã (倧æåãšå°æåãåºå¥) ãæå®ããŸã (äŸ: [`BertLayer`]ã[`GPTJBlock`]ã[`T5Block`] ...)ã
éã¿ãå
±æãããµãã¢ãžã¥ãŒã« (åã蟌ã¿å±€ãªã©) ãç°ãªã FSDP ã©ããããããŠãããã«ãªããªãããã«ããå¿
èŠããããããããã¯éèŠã§ãã
ãã®ããªã·ãŒã䜿çšãããšããã«ãããã ã¢ãã³ã·ã§ã³ãšããã«ç¶ãããã€ãã® MLP ã¬ã€ã€ãŒãå«ããããã¯ããšã«ã©ããã³ã°ãçºçããŸãã
å
±æåã蟌ã¿ãå«ãæ®ãã®å±€ã¯ãåãæãå€åŽã® FSDP ãŠãããã«ã©ãããããã®ã䟿å©ã§ãã
ãããã£ãŠããã©ã³ã¹ããŒã¹ã®ã¢ãã«ã«ã¯ããã䜿çšããŠãã ããã
- ãµã€ãºããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåã¯ãèšå®ãã¡ã€ã«ã«`fsdp_min_num_params`ã远å ããŠãã ããã
èªåã©ããã³ã°ã®ããã® FSDP ã®ãã©ã¡ãŒã¿ã®æå°æ°ãæå®ããŸãã
- èšå®ãã¡ã€ã«ã§ `fsdp_backward_prefetch` ãæå®ã§ããããã«ãªããŸãããæ¬¡ã®ãã©ã¡ãŒã¿ã®ã»ããããã€ããªãã§ããããããå¶åŸ¡ããŸãã
`backward_pre` ãš `backward_pos` ãå©çšå¯èœãªãªãã·ã§ã³ã§ãã
詳现ã«ã€ããŠã¯ã`torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch`ãåç
§ããŠãã ããã
- èšå®ãã¡ã€ã«ã§ `fsdp_forward_prefetch` ãæå®ã§ããããã«ãªããŸãããæ¬¡ã®ãã©ã¡ãŒã¿ã®ã»ããããã€ããªãã§ããããããå¶åŸ¡ããŸãã
`True`ã®å ŽåãFSDP ã¯ãã©ã¯ãŒã ãã¹ã§ã®å®è¡äžã«ãæ¬¡ã«æ¥ããªãŒã«ã®ã£ã¶ãŒãæç€ºçã«ããªãã§ããããŸãã
- èšå®ãã¡ã€ã«ã§ `limit_all_gathers` ãæå®ã§ããããã«ãªããŸããã
`True`ã®å ŽåãFSDP 㯠CPU ã¹ã¬ãããæç€ºçã«åæããŠãå®è¡äžã®ãªãŒã«ã®ã£ã¶ãå€ãããã®ãé²ããŸãã
- `activation_checkpointing`ãèšå®ãã¡ã€ã«ã§æå®ã§ããããã«ãªããŸããã
`True`ã®å ŽåãFSDP ã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ãã¯ãFSDP ã®ã¢ã¯ãã£ããŒã·ã§ã³ãã¯ãªã¢ããããšã§ã¡ã¢ãªäœ¿çšéãåæžããææ³ã§ãã
ç¹å®ã®ã¬ã€ã€ãŒãåŠçããããã¯ã¯ãŒã ãã¹äžã«ããããåèšç®ããŸããäºå®äžãããã¯äœåãªèšç®æéãç ç²ã«ããŸã
ã¡ã¢ãªäœ¿çšéãåæžããŸãã
**泚æãã¹ã泚æç¹ãããã€ããããŸã**
- ãã㯠`generate` ãšäºææ§ããªãããã `--predict_with_generate` ãšãäºææ§ããããŸãã
ãã¹ãŠã® seq2seq/clm ã¹ã¯ãªãã (翻蚳/èŠçŽ/clm ãªã©)ã
åé¡ [#21667](https://github.com/huggingface/transformers/issues/21667) ãåç
§ããŠãã ããã
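For illustration only, a minimal `fsdp_config.json` combining some of the options above might look like this (the values are arbitrary examples, not recommendations, and `BertLayer` assumes a BERT-family model):

```json
{
  "fsdp_transformer_layer_cls_to_wrap": ["BertLayer"],
  "fsdp_backward_prefetch": "backward_pre",
  "fsdp_forward_prefetch": false,
  "limit_all_gathers": true,
  "activation_checkpointing": false
}
```

Such a file would then be passed as `--fsdp "full_shard auto_wrap" --fsdp_config fsdp_config.json`.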
### PyTorch/XLA Fully Sharded Data parallel
TPU ãŠãŒã¶ãŒã®çæ§ã«æå ±ã§ãã PyTorch/XLA 㯠FSDP ããµããŒãããããã«ãªããŸããã
ææ°ã® Fully Sharded Data Parallel (FSDP) ãã¬ãŒãã³ã°ããã¹ãŠãµããŒããããŠããŸãã
詳现ã«ã€ããŠã¯ã[FSDP ã䜿çšãã Cloud TPU ã§ã® PyTorch ã¢ãã«ã®ã¹ã±ãŒãªã³ã°](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) ããã³ [PyTorch/XLA å®è£
ãåç
§ããŠãã ããã FSDP ã®](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp)
å¿
èŠãªã®ã¯ãèšå®ãéããŠæå¹ã«ããããšã ãã§ãã
**FSDP ãµããŒãã«å¿
èŠãª PyTorch/XLA ããŒãžã§ã³**: >=2.0
**äœ¿çšæ³**ïŒ
`--fsdp "full shard"` ãã`--fsdp_config <path_to_fsdp_config.json>` ã«å ããããæ¬¡ã®å€æŽãšãšãã«æž¡ããŸãã
- PyTorch/XLA FSDP ãæå¹ã«ããã«ã¯ã`xla`ã`True`ã«èšå®ããå¿
èŠããããŸãã
- `xla_fsdp_settings` å€ã¯ãXLA FSDP ã©ããã³ã° ãã©ã¡ãŒã¿ãæ ŒçŽããèŸæžã§ãã
ãªãã·ã§ã³ã®å®å
šãªãªã¹ãã«ã€ããŠã¯ã[ãã¡ã](
https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py)ã
- `xla_fsdp_grad_ckpt`ã `True`ã®å Žåããã¹ãããã XLA FSDP ã§ã©ãããããåã¬ã€ã€ãŒäžã§åŸé
ãã§ãã¯ãã€ã³ãã䜿çšããŸãã
ãã®èšå®ã¯ãxla ãã©ã°ã true ã«èšå®ãããŠãããèªåã©ããã³ã° ããªã·ãŒãæå®ãããŠããå Žåã«ã®ã¿äœ¿çšã§ããŸãã
`fsdp_min_num_params` ãŸã㯠`fsdp_transformer_layer_cls_to_wrap`ã
- ãã©ã³ã¹ãã©ãŒã㌠ããŒã¹ã®èªåã©ãã ããªã·ãŒãŸãã¯ãµã€ãº ããŒã¹ã®èªåã©ãã ããªã·ãŒã®ããããã䜿çšã§ããŸãã
- ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåãæ§æãã¡ã€ã«ã§ `fsdp_transformer_layer_cls_to_wrap` ãæå®ããããšããå§ãããŸããæå®ããªãå Žåã䜿çšå¯èœãªå Žåãããã©ã«ãå€ã¯ `model._no_split_modules` ã«ãªããŸãã
ããã¯ãã©ãããããã©ã³ã¹ãã©ãŒããŒå±€ã¯ã©ã¹åã®ãªã¹ã (倧æåãšå°æåãåºå¥) ãæå®ããŸã (äŸ: [`BertLayer`]ã[`GPTJBlock`]ã[`T5Block`] ...)ã
éã¿ãå
±æãããµãã¢ãžã¥ãŒã« (åã蟌ã¿å±€ãªã©) ãç°ãªã FSDP ã©ããããããŠãããã«ãªããªãããã«ããå¿
èŠããããããããã¯éèŠã§ãã
ãã®ããªã·ãŒã䜿çšãããšããã«ãããã ã¢ãã³ã·ã§ã³ãšããã«ç¶ãããã€ãã® MLP ã¬ã€ã€ãŒãå«ããããã¯ããšã«ã©ããã³ã°ãçºçããŸãã
å
±æåã蟌ã¿ãå«ãæ®ãã®å±€ã¯ãåãæãå€åŽã® FSDP ãŠãããã«ã©ãããããã®ã䟿å©ã§ãã
ãããã£ãŠããã©ã³ã¹ããŒã¹ã®ã¢ãã«ã«ã¯ããã䜿çšããŠãã ããã
- ãµã€ãºããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåã¯ãèšå®ãã¡ã€ã«ã«`fsdp_min_num_params`ã远å ããŠãã ããã
èªåã©ããã³ã°ã®ããã® FSDP ã®ãã©ã¡ãŒã¿ã®æå°æ°ãæå®ããŸãã
### Using Trainer for accelerated PyTorch Training on Mac
PyTorch v1.12 ãªãªãŒã¹ã«ãããéçºè
ãšç ç©¶è
㯠Apple ã·ãªã³ã³ GPU ãå©çšããŠã¢ãã« ãã¬ãŒãã³ã°ã倧å¹
ã«é«éåã§ããŸãã
ããã«ããããããã¿ã€ãã³ã°ã埮調æŽãªã©ã®æ©æ¢°åŠç¿ã¯ãŒã¯ãããŒã Mac äžã§ããŒã«ã«ã§å®è¡ã§ããããã«ãªããŸãã
PyTorch ã®ããã¯ãšã³ããšããŠã® Apple ã® Metal Performance Shaders (MPS) ã¯ãããå¯èœã«ããæ°ãã `"mps"` ããã€ã¹çµç±ã§äœ¿çšã§ããŸãã
ããã«ãããèšç®ã°ã©ããšããªããã£ãã MPS Graph ãã¬ãŒã ã¯ãŒã¯ãš MPS ã«ãã£ãŠæäŸããã調æŽãããã«ãŒãã«ã«ãããã³ã°ãããŸãã
詳现ã«ã€ããŠã¯ãå
¬åŒããã¥ã¡ã³ã [Mac ã§ã® Accelerated PyTorch Training ã®ç޹ä»](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/) ãåç
§ããŠãã ããã
ããã³ [MPS ããã¯ãšã³ã](https://pytorch.org/docs/stable/notes/mps.html)ã
<Tip warning={false}>
MacOS ãã·ã³ã« PyTorch >= 1.13 (å·çæç¹ã§ã¯ãã€ããªãŒ ããŒãžã§ã³) ãã€ã³ã¹ããŒã«ããããšã匷ããå§ãããŸãã
ãã©ã³ã¹ããŒã¹ã®ã¢ãã«ã®ã¢ãã«ã®æ£ç¢ºæ§ãšããã©ãŒãã³ã¹ã®åäžã«é¢é£ããäž»èŠãªä¿®æ£ãè¡ãããŠããŸãã
詳现ã«ã€ããŠã¯ãhttps://github.com/pytorch/pytorch/issues/82707 ãåç
§ããŠãã ããã
</Tip>
**Apple Silicon ãããã䜿çšãããã¬ãŒãã³ã°ãšæšè«ã®å©ç¹**
1. ãŠãŒã¶ãŒãããŒã«ã«ã§å€§èŠæš¡ãªãããã¯ãŒã¯ãããã ãµã€ãºããã¬ãŒãã³ã°ã§ããããã«ããŸã
2. ãŠããã¡ã€ã ã¡ã¢ãª ã¢ãŒããã¯ãã£ã«ãããããŒã¿ååŸã®é
å»¶ãççž®ãããGPU ãã¡ã¢ãª ã¹ãã¢å
šäœã«çŽæ¥ã¢ã¯ã»ã¹ã§ããããã«ãªããŸãã
ãããã£ãŠããšã³ãããŒãšã³ãã®ããã©ãŒãã³ã¹ãåäžããŸãã
3. ã¯ã©ãŠãããŒã¹ã®éçºã«é¢é£ããã³ã¹ãã远å ã®ããŒã«ã« GPU ã®å¿
èŠæ§ãåæžããŸãã
**åææ¡ä»¶**: mps ãµããŒããåããããŒããã€ã³ã¹ããŒã«ããã«ã¯ã
ãã®çŽ æŽãããã¡ãã£ã¢èšäº [GPU ã¢ã¯ã»ã©ã¬ãŒã·ã§ã³ã M1 Mac ã® PyTorch ã«ç»å Ž](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1) ã«åŸã£ãŠãã ããã ã
**äœ¿çšæ³**ïŒ
`mps` ããã€ã¹ã¯ã`cuda` ããã€ã¹ã䜿çšãããæ¹æ³ãšåæ§ã«å©çšå¯èœãªå Žåãããã©ã«ãã§äœ¿çšãããŸãã
ãããã£ãŠããŠãŒã¶ãŒã«ããã¢ã¯ã·ã§ã³ã¯å¿
èŠãããŸããã
ããšãã°ã以äžã®ã³ãã³ãã䜿çšããŠãApple Silicon GPU ã䜿çšããŠå
¬åŒã® Glue ããã¹ãåé¡ã¿ã¹ã¯ã (ã«ãŒã ãã©ã«ããŒãã) å®è¡ã§ããŸãã
```bash
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
**泚æãã¹ãããã€ãã®æ³šæäºé
**
1. äžéšã® PyTorch æäœã¯ mps ã«å®è£
ãããŠããªãããããšã©ãŒãã¹ããŒãããŸãã
ãããåé¿ãã 1 ã€ã®æ¹æ³ã¯ãç°å¢å€æ° `PYTORCH_ENABLE_MPS_FALLBACK=1` ãèšå®ããããšã§ãã
ãããã®æäœã§ã¯ CPU ã«ãã©ãŒã«ããã¯ããŸãããã ããããã§ã UserWarning ãã¹ããŒãããŸãã
2. 忣ã»ããã¢ãã`gloo`ããã³`nccl`ã¯ã`mps`ããã€ã¹ã§ã¯åäœããŸããã
ããã¯ãçŸåšãmpsãããã€ã¹ ã¿ã€ãã®åäž GPU ã®ã¿ã䜿çšã§ããããšãæå³ããŸãã
æåŸã«ãèŠããŠãããŠãã ããã ð€ `Trainer` 㯠MPS ããã¯ãšã³ãã®ã¿ãçµ±åããããã
MPS ããã¯ãšã³ãã®äœ¿çšã«é¢ããŠåé¡ã質åãããå Žåã¯ã
[PyTorch GitHub](https://github.com/pytorch/pytorch/issues) ã«åé¡ãæåºããŠãã ããã
## Using Accelerate Launcher with Trainer
å éããŠãã¬ãŒããŒã«ãã¯ãŒãäžããŸãããããŠãŒã¶ãŒãæåŸ
ããããšã«é¢ããŠã¯ã次ã®ãšããã§ãã
- ãã¬ãŒããŒåŒæ°ã«å¯Ÿã㊠FSDPãDeepSpeed ãªã©ã®ãã¬ãŒã㌠ã€ã³ãã¬ãŒã·ã§ã³ã倿Žããã«äœ¿çšãç¶ããããšãã§ããŸãã
- ãã¬ãŒããŒã§ Accelerate Launcher ã䜿çšã§ããããã«ãªããŸãã (æšå¥š)ã
ãã¬ãŒããŒã§ Accelerate Launcher ã䜿çšããæé :
1. ð€ Accelerate ãã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ãããAccelerate ããªããš `Trainer` ã䜿çšããããšã¯ã§ããŸãããããã§ãªãå Žåã¯ã`pip install accelerate`ããŠãã ããã Accelerate ã®ããŒãžã§ã³ãæŽæ°ããå¿
èŠãããå ŽåããããŸã: `pip install accelerate --upgrade`
2. `accelerate config`ãå®è¡ããã¢ã³ã±ãŒãã«èšå
¥ããŸãã以äžã¯å éèšå®ã®äŸã§ãã
a. DDP ãã«ãããŒã ãã«ã GPU æ§æ:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0 #change rank as per the node
main_process_ip: 192.168.20.1
main_process_port: 9898
main_training_function: main
mixed_precision: fp16
num_machines: 2
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
b. FSDP config:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: true
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: BertLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
c.ãã¡ã€ã«ãæã DeepSpeed æ§æ:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/user/configs/ds_zero3_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
d.å éãã©ã°ã€ã³ã䜿çšãã DeepSpeed æ§æ:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 0.7
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_init_flag: true
zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
3. å éèšå®ãŸãã¯ã©ã³ãã£ãŒåŒæ°ã«ãã£ãŠäžèšã§åŠçãããåŒæ°ä»¥å€ã®åŒæ°ã䜿çšããŠããã¬ãŒã㌠ã¹ã¯ãªãããå®è¡ããŸãã
以äžã¯ãäžèšã® FSDP æ§æã§`accelerate launcher`ã䜿çšããŠ`run_glue.py`ãå®è¡ããäŸã§ãã
```bash
cd transformers
accelerate launch \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
4. `accelerate launch`ããããã® cmd åŒæ°ãçŽæ¥äœ¿çšããããšãã§ããŸããäžã®äŸã¯æ¬¡ã®ããã«ãããã³ã°ãããŸãã
```bash
cd transformers
accelerate launch --num_processes=2 \
--use_fsdp \
--mixed_precision=bf16 \
--fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \
--fsdp_transformer_layer_cls_to_wrap="BertLayer" \
--fsdp_sharding_strategy=1 \
--fsdp_state_dict_type=FULL_STATE_DICT \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
詳现ã«ã€ããŠã¯ãð€ Accelerate CLI ã¬ã€ããåç
§ããŠãã ãã: [ð€ Accelerate ã¹ã¯ãªããã®èµ·å](https://huggingface.co/docs/accelerate/basic_tutorials/launch)ã
ç§»åãããã»ã¯ã·ã§ã³:
[ <a href="./deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed"></a>
| <a href="./deepspeed#deepspeed-installation">Installation</a><a id="installation"></a>
| <a href="./deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deployment-with-multiple-gpus"></a>
| <a href="./deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deployment-with-one-gpu"></a>
| <a href="./deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deployment-in-notebooks"></a>
| <a href="./deepspeed#deepspeed-config">Configuration</a><a id="configuration"></a>
| <a href="./deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="passing-configuration"></a>
| <a href="./deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="shared-configuration"></a>
| <a href="./deepspeed#deepspeed-zero">ZeRO</a><a id="zero"></a>
| <a href="./deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="zero-2-config"></a>
| <a href="./deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="zero-3-config"></a>
| <a href="./deepspeed#deepspeed-nvme">NVMe Support</a><a id="nvme-support"></a>
| <a href="./deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="zero-2-vs-zero-3-performance"></a>
| <a href="./deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="zero-2-example"></a>
| <a href="./deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="zero-3-example"></a>
| <a href="./deepspeed#deepspeed-optimizer">Optimizer</a><a id="optimizer"></a>
| <a href="./deepspeed#deepspeed-scheduler">Scheduler</a><a id="scheduler"></a>
| <a href="./deepspeed#deepspeed-fp32">fp32 Precision</a><a id="fp32-precision"></a>
| <a href="./deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="automatic-mixed-precision"></a>
| <a href="./deepspeed#deepspeed-bs">Batch Size</a><a id="batch-size"></a>
| <a href="./deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="gradient-accumulation"></a>
| <a href="./deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="gradient-clipping"></a>
| <a href="./deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="getting-the-model-weights-out"></a>
]
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ããŒã¿ç
§åè


ããŒã¿ç
§ååšã¯ãããŒã¿ã»ããèŠçŽ ã®ãªã¹ããå
¥åãšããŠäœ¿çšããŠãããã圢æãããªããžã§ã¯ãã§ãããããã®èŠçŽ ã¯ã`train_dataset` ãŸã㯠`eval_dataset` ã®èŠçŽ ãšåãåã§ãã

ãããæ§ç¯ã§ããããã«ããããã«ãããŒã¿ç
§åè
ã¯äœããã®åŠç (ããã£ã³ã°ãªã©) ãé©çšããå ŽåããããŸãããã®ãã¡ã®ããã€ã ([`DataCollatorForLanguageModeling`] ãªã©) ã¯ã圢æãããããäžã§ã©ã³ãã ãªããŒã¿æ¡åŒµ (ã©ã³ãã ãã¹ãã³ã°ãªã©) ãé©çšããŸãã

䜿çšäŸã¯ã[ãµã³ãã« ã¹ã¯ãªãã](../examples) ãŸã㯠[ãµã³ãã« ããŒãããã¯](../notebooks) ã«ãããŸãã
## Default data collator
[[autodoc]] data.data_collator.default_data_collator
## DefaultDataCollator
[[autodoc]] data.data_collator.DefaultDataCollator
## DataCollatorWithPadding
[[autodoc]] data.data_collator.DataCollatorWithPadding
## DataCollatorForTokenClassification
[[autodoc]] data.data_collator.DataCollatorForTokenClassification
## DataCollatorForSeq2Seq
[[autodoc]] data.data_collator.DataCollatorForSeq2Seq
## DataCollatorForLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
## DataCollatorForWholeWordMask
[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
## DataCollatorForPermutationLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeepSpeed Integration
[DeepSpeed](https://github.com/microsoft/DeepSpeed) ã¯ã[ZeRO è«æ](https://arxiv.org/abs/1910.02054) ã§èª¬æãããŠãããã¹ãŠãå®è£
ããŸããçŸåšã次ã®ãã®ãå®å
šã«ãµããŒãããŠããŸãã

1. ãªããã£ãã€ã¶ãŒã®ç¶æ
åå² (ZeRO ã¹ããŒãž 1)
2. åŸé
åå² (ZeRO ã¹ããŒãž 2)
3. ãã©ã¡ãŒã¿ãŒã®åå² (ZeRO ã¹ããŒãž 3)
4. ã«ã¹ã¿ã æ··å粟床ãã¬ãŒãã³ã°åŠç
5. äžé£ã®é«é CUDA æ¡åŒµããŒã¹ã®ãªããã£ãã€ã¶ãŒ
6. CPU ããã³ NVMe ãžã® ZeRO ãªãããŒã

ZeRO-Offload ã«ã¯ç¬èªã®å°çšããŒããŒããããŸã: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)ã NVMe ãµããŒãã«ã€ããŠã¯ãè«æ [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857) ã§èª¬æãããŠããŸãã

DeepSpeed ZeRO-2 ã¯ããã®æ©èœãæšè«ã«ã¯åœ¹ã«ç«ããªããããäž»ã«ãã¬ãŒãã³ã°ã®ã¿ã«äœ¿çšãããŸãã

DeepSpeed ZeRO-3 ã¯ãåäžã® GPU ã§ã¯ããŒãã§ããªã巚倧ãªã¢ãã«ãè€æ°ã® GPU ã«ããŒãã§ãããããæšè«ã«ã䜿çšã§ããŸãã

ð€ Transformers ã¯ã2 ã€ã®ãªãã·ã§ã³ãä»ã㊠[DeepSpeed](https://github.com/microsoft/DeepSpeed) ãçµ±åããŸãã

1. [`Trainer`] ã«ããã³ã¢ DeepSpeed æ©èœã®çµ±åãããã¯ãäœã§ããã£ãŠãããã¿ã€ãã®çµ±åã§ã - ã«ã¹ã¿ã æ§æãã¡ã€ã«ãæå®ãããããã³ãã¬ãŒãã䜿çšããã ãã§ãä»ã«äœãããå¿
èŠã¯ãããŸããããã®ããã¥ã¡ã³ãã§ã¯äž»ã«ãã®æ©èœã«çŠç¹ãåœãŠãŸãã
2. [`Trainer`] ã䜿çšãããDeepSpeed ãçµ±åããç¬èªã®ãã¬ãŒããŒã䜿çšãããå Žå - `from_pretrained` ã `from_config` ãªã©ã®ã³ã¢æ©èœã«ã¯ãZeRO ã¹ããŒãž 3 以éã® `zero.Init` ãªã©ãDeepSpeed ã®éèŠãªæ©èœã®çµ±åãå«ãŸããŠããŸãããã®æ©èœã掻çšããã«ã¯ã[éãã¬ãŒã㌠DeepSpeed çµ±å](#nontrainer-deepspeed-integration) ããèªã¿ãã ããã

çµ±åãããŠãããã®:

ãã¬ãŒãã³ã°ïŒ

1. DeepSpeed ZeRO ãã¬ãŒãã³ã°ã¯ãZeRO-Infinity (CPU ããã³ NVMe ãªãããŒã) ã䜿çšããŠãå®å
šãª ZeRO ã¹ããŒãž 1ã2ãããã³ 3 ããµããŒãããŸãã

æšè«ïŒ

1. DeepSpeed ZeRO Inference ã¯ãZeRO-Infinity ã«ãã ZeRO ã¹ããŒãž 3 ããµããŒãããŸãããã¬ãŒãã³ã°ãšåã ZeRO ãããã³ã«ã䜿çšããŸããããªããã£ãã€ã¶ãš lr ã¹ã±ãžã¥ãŒã©ã¯äœ¿çšãããã¹ããŒãž 3 ã®ã¿ãé¢é£ããŸãã詳现ã«ã€ããŠã¯ã[ãŒãæšè«](#zero-inference) ãåç
§ããŠãã ããã

ãŸããZeRO ã®ä»£ããã« Tensor Parallelism ã䜿çšãããŸã£ããç°ãªããã¯ãããžãŒã§ãã DeepSpeed Inference ããããŸã (è¿æ¥å
¬é)ã
<a id='deepspeed-trainer-integration'></a>
## Trainer Deepspeed Integration
<a id='deepspeed-installation'></a>
### Installation
pypi çµç±ã§ã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããŸãã
```bash
pip install deepspeed
```
ãŸã㯠`transformers` ã® `extras` çµç±:
```bash
pip install transformers[deepspeed]
```
ãŸãã¯ã[DeepSpeed ã® GitHub ããŒãž](https://github.com/microsoft/deepspeed#installation) ã§è©³çްã確èªããŠãã ããã
[é«åºŠãªã€ã³ã¹ããŒã«](https://www.deepspeed.ai/tutorials/advanced-install/)ã
ããã§ããã«ãã«èŠåŽããå Žåã¯ããŸã [CUDA æ¡åŒµæ©èœã®ã€ã³ã¹ããŒã« ããŒã](trainer#cuda-extension-installation-notes) ãå¿
ãèªãã§ãã ããã

æ¡åŒµæ©èœãäºåãã«ãããå®è¡æã«ãã«ããããããšã«äŸåããŠããŠãäžèšã®è§£æ±ºçããã¹ãŠè©ŠããŠã圹ã«ç«ããªãã£ãå Žåãæ¬¡ã«è©Šãã¹ãããšã¯ãã€ã³ã¹ããŒã«ããåã«ã¢ãžã¥ãŒã«ãäºåã«ãã«ãããããšã§ãã

DeepSpeed ã®ããŒã«ã« ãã«ããäœæããã«ã¯:
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \
--global-option="build_ext" --global-option="-j8" --no-cache -v \
--disable-pip-version-check 2>&1 | tee build.log
```
NVMe ãªãããŒãã䜿çšããå Žåã¯ãäžèšã®æé ã«`DS_BUILD_AIO=1`ãå«ããå¿
èŠããããŸã (ãŸãã*libaio-dev* ãã·ã¹ãã å
šäœã«ã€ã³ã¹ããŒã«ããŸã)ã

`TORCH_CUDA_ARCH_LIST` ãç·šéããŠã䜿çšãã GPU ã«ãŒãã®ã¢ãŒããã¯ãã£ã®ã³ãŒããæ¿å
¥ããŸãããã¹ãŠã®ã«ãŒããåãã§ãããšä»®å®ãããšãæ¬¡ã®æ¹æ³ã§ã¢ãŒããååŸã§ããŸãã
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
```
ãããã£ãŠã`8, 6`ãååŸããå Žåã¯ã`TORCH_CUDA_ARCH_LIST="8.6"`ã䜿çšããŸããè€æ°ã®ç°ãªãã«ãŒãããæã¡ã®å Žåã¯ã`TORCH_CUDA_ARCH_LIST="6.1;8.6"`ã®ããã«ãã¹ãŠããªã¹ãããããšãã§ããŸãã
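æ€åºãã compute capability ã®ã¿ãã«ãã `TORCH_CUDA_ARCH_LIST` ã«æž¡ãæååãçµã¿ç«ãŠãæé ã¯ã次ã®ããã«ã¹ã±ããã§ããŸã (`arch_list` ã¯èª¬æçšã®ä»®ã®é¢æ°åã§ã)ã

```python
# 説æçšã®ã¹ã±ãã: compute capability ã®ã¿ãã«ã®ãªã¹ããã
# TORCH_CUDA_ARCH_LIST ã«æž¡ãæåå (äŸ: "6.1;8.6") ãçµã¿ç«ãŠãŸãã
# å®éã®ã¿ãã«ã¯ torch.cuda.get_device_capability() ã§ GPU ããšã«ååŸã§ããŸãã

def arch_list(capabilities):
    """éè€ãé€ããæé ã«äžŠã¹ã TORCH_CUDA_ARCH_LIST æååãè¿ãã"""
    archs = sorted({f"{major}.{minor}" for major, minor in capabilities})
    return ";".join(archs)

# äŸ: GPU 0 ã (6, 1)ãGPU 1 ãš 2 ã (8, 6) ãè¿ããå Žå
print(arch_list([(6, 1), (8, 6), (8, 6)]))  # 6.1;8.6
```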
è€æ°ã®ãã·ã³ã§åãã»ããã¢ããã䜿çšããå¿
èŠãããå Žåã¯ããã€ã㪠ãã€ãŒã«ãäœæããŸãã
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \
python setup.py build_ext -j8 bdist_wheel
```
`dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`ã®ãããªãã®ãçæãããã®ã§ããããã€ã³ã¹ããŒã«ã§ããŸã
`pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`ãšããŠããŒã«ã«ãŸãã¯ä»ã®ãã·ã³ã«ã€ã³ã¹ããŒã«ããŸãã
ç¹°ãè¿ããŸããã`TORCH_CUDA_ARCH_LIST`ãã¿ãŒã²ãã ã¢ãŒããã¯ãã£ã«åãããŠèª¿æŽããããšãå¿ããªãã§ãã ããã
NVIDIA GPU ã®å®å
šãªãªã¹ããšãããã«å¯Ÿå¿ãã **ã³ã³ãã¥ãŒãã£ã³ã°æ©èœ** (ãã®ã³ã³ããã¹ãã§ã¯ã¢ãŒããšåã) ã¯ã[ãã¡ã](https://developer.nvidia.com/cuda-gpus) ã§èŠã€ããããšãã§ããŸãã
以äžã䜿çšããŠãpytorch ãæ§ç¯ãããã¢ãŒãã確èªã§ããŸãã
```bash
python -c "import torch; print(torch.cuda.get_arch_list())"
```
ããã§ã¯ãã€ã³ã¹ããŒã«ãããŠãã GPU ã® 1 ã€ã®ã¢ãŒããèŠã€ããæ¹æ³ã説æããŸããããšãã°ãGPU 0 ã®å Žå:
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; \
print(torch.cuda.get_device_properties(torch.device('cuda')))"
```
åºåãæ¬¡ã®å Žå:
```bash
_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)
```
ããããã°ããã®ã«ãŒãã®ã¢ãŒãã`8.6`ã§ããããšãããããŸãã
`TORCH_CUDA_ARCH_LIST` ãå®å
šã«çç¥ããããšãã§ããŸãããããã°ããã«ã ããã°ã©ã ãèªåçã«ã¯ãšãªãå®è¡ãããã«ããè¡ããããã·ã³ã® GPU ã¢ãŒããã¯ãã£ã䜿çšãããŸããããã¯ã¿ãŒã²ãã ãã·ã³ã® GPU ãšäžèŽããå Žåãããã°ãäžèŽããªãå ŽåããããŸãã®ã§ãç®çã®ã¢ãŒããæç€ºçã«æå®ããããšããå§ãããŸãã
ææ¡ãããããšããã¹ãŠè©ŠããŠããŸã ãã«ãã®åé¡ãçºçããå Žåã¯ã[DeepSpeed](https://github.com/microsoft/DeepSpeed/issues) ã® GitHub Issue ã«å ±åããŠãã ããã
<a id='deepspeed-multi-gpu'></a>
### Deployment with multiple GPUs
DeepSpeed çµ±åããããã€ããã«ã¯ã[`Trainer`] ã³ãã³ã ã©ã€ã³åŒæ°ã調æŽããŠæ°ããåŒæ° `--deepspeed ds_config.json` ãå«ããŸããããã§ã`ds_config.json` 㯠DeepSpeed æ§æãã¡ã€ã«ã§ãã
[ãã¡ã](https://www.deepspeed.ai/docs/config-json/)ã«èšèŒãããŠããŸãããã¡ã€ã«åã¯ããªã次第ã§ãã
DeepSpeed ã®`add_config_arguments`ãŠãŒãã£ãªãã£ã䜿çšããŠãå¿
èŠãªã³ãã³ã ã©ã€ã³åŒæ°ãã³ãŒãã«è¿œå ããããšããå§ãããŸãã詳现ã«ã€ããŠã¯ã[DeepSpeed ã®åŒæ°è§£æ](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) ããã¥ã¡ã³ããåç
§ããŠãã ããã
ããã§éžæããã©ã³ãã£ãŒã䜿çšã§ããŸãã pytorch ã©ã³ãã£ãŒãåŒãç¶ã䜿çšã§ããŸãã
```bash
torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ãŸãã¯ã`deepspeed`ã«ãã£ãŠæäŸãããã©ã³ãã£ãŒã䜿çšããŸãã
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ã芧ã®ãšãããåŒæ°ã¯åãã§ã¯ãããŸããããã»ãšãã©ã®ããŒãºã§ã¯ã©ã¡ãã§ãæ©èœããŸããããŸããŸãªããŒããš GPU ãæ§æããæ¹æ³ã®è©³çްã«ã€ããŠã¯ã[ãã¡ã](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ãåç
§ããŠãã ããã
`deepspeed`ã©ã³ãã£ãŒã䜿çšããå©çšå¯èœãªãã¹ãŠã® GPU ã䜿çšãããå Žåã¯ã`--num_gpus`ãã©ã°ãçç¥ããã ãã§ãã
以äžã¯ãå©çšå¯èœãªãã¹ãŠã® GPU ããããã€ãã DeepSpeed ã§`run_translation.py`ãå®è¡ããäŸã§ãã
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
DeepSpeed ã®ããã¥ã¡ã³ãã«ã¯ã`--deepspeed --deepspeed_config ds_config.json`ã衚瀺ãããå¯èœæ§ãé«ãããšã«æ³šæããŠãã ããã
DeepSpeed é¢é£ã®åŒæ°ã 2 ã€ãããŸãããç°¡åã«ããããã§ãããåŠçãã¹ãåŒæ°ããã§ã«éåžžã«å€ãããã§ãã
ãã® 2 ã€ã 1 ã€ã®åŒæ°ã«çµåããŸããã
å®éã®äœ¿çšäŸã«ã€ããŠã¯ããã® [æçš¿](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) ãåç
§ããŠãã ããã
<a id='deepspeed-one-gpu'></a>
### Deployment with one GPU
1 ã€ã® GPU ã§ DeepSpeed ããããã€ããã«ã¯ã[`Trainer`] ã³ãã³ã ã©ã€ã³åŒæ°ã次ã®ããã«èª¿æŽããŸãã
```bash
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
ããã¯è€æ°ã® GPU ã®å Žåãšã»ãŒåãã§ãããããã§ã¯ `--num_gpus=1` ã«ãããDeepSpeed ã« 1 ã€ã® GPU ã ãã䜿çšããããã«æç€ºçã«æç€ºããŸããããã©ã«ãã§ã¯ãDeepSpeed ã¯æå®ãããããŒãäžã§èªèã§ãããã¹ãŠã® GPU ããããã€ããŸããèµ·åãã GPU ã 1 ã€ã ãã®å Žåããã®åŒæ°ã¯å¿
èŠãããŸãããæ¬¡ã® [ããã¥ã¡ã³ã](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ã§ã¯ãã©ã³ãã£ãŒ ãªãã·ã§ã³ã«ã€ããŠèª¬æããŠããŸãã
1 ã€ã® GPU ã ãã§ DeepSpeed ã䜿çšãããã®ã¯ãªãã§ãã?
1. äžéšã®èšç®ãšã¡ã¢ãªããã¹ãã® CPU ãš RAM ã«å§ä»»ã§ãã ZeRO ãªãããŒãæ©èœãåããŠãããããã¢ãã«ã®ããŒãºã«åãããŠããå€ãã® GPU ãªãœãŒã¹ãæ®ãããšãã§ããŸããããã«ãããéåžžã¯åãŸããªããããªå€§ããªããã ãµã€ãºã®äœ¿çšããéåžžã«å€§ããªã¢ãã«ã®ãã£ããã£ã³ã°ãå¯èœã«ãªããŸãã
2. ã¹ããŒã㪠GPU ã¡ã¢ãªç®¡çã·ã¹ãã ãæäŸããã¡ã¢ãªã®æçåãæå°éã«æããŸããããã«ããããã倧ããªã¢ãã«ãšããŒã¿ ããããåãŸãããã«ãªããŸãã

æ¬¡ã«æ§æã«ã€ããŠè©³ãã説æããŸãããåäžã® GPU ã§å€§å¹
ãªæ¹åãå®çŸããéµã¯ä»¥äžã®èšå®ã«ãããŸãã DeepSpeed ã䜿çšããã«ã¯ãæ§æãã¡ã€ã«ã«å°ãªããšãæ¬¡ã®æ§æãå¿
èŠã§ãã
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true
}
}
```
ããã«ããããªããã£ãã€ã¶ãŒã®ãªãããŒãããã®ä»ã®éèŠãªæ©èœãæå¹ã«ãªããŸãããããã¡ ãµã€ãºã詊ããŠã¿ããšããã§ãããã
詳现ã«ã€ããŠã¯ã以äžã®ãã£ã¹ã«ãã·ã§ã³ãåç
§ããŠãã ããã

ãã®ã¿ã€ãã®ãããã€ã¡ã³ãã®å®éçãªäœ¿çšäŸã«ã€ããŠã¯ããã® [æçš¿](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685) ãåç
§ããŠãã ããã
ãã®ããã¥ã¡ã³ãã§è©³ãã説æãããŠããããã«ãCPU ããã³ NVMe ãªãããŒããåãã ZeRO-3 ã詊ãããšãã§ããŸãã
ããŒãïŒ
- GPU 0 ãšã¯ç°ãªãç¹å®ã® GPU ã§å®è¡ããå¿
èŠãããå Žåã`CUDA_VISIBLE_DEVICES` ã䜿çšããŠå©çšå¯èœãª GPU ã®è¡šç€ºç¯å²ãå¶éããããšã¯ã§ããŸããã代ããã«ã次ã®æ§æã䜿çšããå¿
èŠããããŸãã
```bash
deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ...
```
ãã®äŸã§ã¯ãDeepSpeed ã« GPU 1 (2 çªç®ã® GPU) ã䜿çšããããã«æç€ºããŸãã
<a id='deepspeed-multi-node'></a>
### è€æ°ã®ããŒãã䜿çšãããããã€ã¡ã³ã
ãã®ã»ã¯ã·ã§ã³ã®æ
å ±ã¯ DeepSpeed çµ±åã«åºæã®ãã®ã§ã¯ãªãããããããã«ãããŒã ããã°ã©ã ã«é©çšã§ããŸãããã ããDeepSpeed ã¯ãSLURM ç°å¢ã§ãªãéããä»ã®ã©ã³ãã£ãŒããã䜿ãããã`deepspeed`ã©ã³ãã£ãŒãæäŸããŸãã

ãã®ã»ã¯ã·ã§ã³ã§ã¯ããããã 8 GPU ãåãã 2 ã€ã®ããŒãããããšä»®å®ããŸãããŸããæåã®ããŒãã«ã¯ `ssh hostname1` ã䜿çšããŠã2 çªç®ã®ããŒãã«ã¯ `ssh hostname2` ã䜿çšããŠæ¥ç¶ã§ããŸããäž¡æ¹ãšããã¹ã¯ãŒããªãã§ããŒã«ã«ã® ssh çµç±ã§çžäºã«æ¥ç¶ã§ããå¿
èŠããããŸãããã¡ããããããã®ãã¹ã (ããŒã) åããäœæ¥ããŠããå®éã®ãã¹ãåã«å€æŽããå¿
èŠããããŸãã
#### The torch.distributed.run launcher
ããšãã°ã`torch.distributed.run` ã䜿çšããã«ã¯ã次ã®ããã«ããŸãã
```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```
åããŒãã« SSH ã§æ¥ç¶ããããããã®ããŒãã§åãã³ãã³ããå®è¡ããå¿
èŠããããŸããæ¥ãå¿
èŠã¯ãããŸãããã©ã³ãã£ãŒã¯äž¡æ¹ã®ããŒããåæãããŸã§åŸ
æ©ããŸãã

詳现ã«ã€ããŠã¯ã[torchrun](https://pytorch.org/docs/stable/elastic/run.html) ãåç
§ããŠãã ãããã¡ãªã¿ã«ããã㯠pytorch ã®æ°ããŒãžã§ã³åã®`torch.distributed.launch`ã眮ãæããã©ã³ãã£ãŒã§ããããŸãã
#### ãã£ãŒãã¹ããŒã ã©ã³ãã£ãŒ
代ããã«`deepspeed`ã©ã³ãã£ãŒã䜿çšããã«ã¯ããŸã`hostfile`ãã¡ã€ã«ãäœæããå¿
èŠããããŸãã
```
hostname1 slots=8
hostname2 slots=8
```
ãããŠã次ã®ããã«èµ·åã§ããŸãã
```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```
`torch.distributed.run`ã©ã³ãã£ãŒãšã¯ç°ãªãã`deepspeed`ã¯äž¡æ¹ã®ããŒãã§ãã®ã³ãã³ããèªåçã«èµ·åããŸãã
詳现ã«ã€ããŠã¯ã[ãªãœãŒã¹æ§æ (ãã«ãããŒã)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ãåç
§ããŠãã ããã
#### Launching in a SLURM environment
SLURM ç°å¢ã§ã¯ã次ã®ã¢ãããŒãã䜿çšã§ããŸãã以äžã¯ãç¹å®ã® SLURM ç°å¢ã«é©åãããããã«å¿
èŠãª slurm ã¹ã¯ãªãã `launch.slurm` ã§ãã
```bash
#SBATCH --job-name=test-nodes # name
#SBATCH --nodes=2 # nodes
#SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10 # number of cores per tasks
#SBATCH --gres=gpu:8 # number of gpus
#SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out # output file name
export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901
srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
--nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
--master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```
ããšã¯å®è¡ãã¹ã±ãžã¥ãŒã«ããã ãã§ãã
```bash
sbatch launch.slurm
```
#### Use of Non-shared filesystem
ããã©ã«ãã§ã¯ãDeepSpeed ã¯ãã«ãããŒãç°å¢ãå
±æã¹ãã¬ãŒãžã䜿çšããããšãæ³å®ããŠããŸãããããåœãŠã¯ãŸãããåããŒããããŒã«ã« ãã¡ã€ã«ã·ã¹ãã ããåç
§ã§ããªãå Žåã¯ãèšå®ãã¡ã€ã«ã調æŽã㊠[`checkpoint` ã»ã¯ã·ã§ã³ (ãã§ãã¯ãã€ã³ã ãªãã·ã§ã³)](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) ã次ã®èšå®ã§å«ããå¿
èŠããããŸãã
```json
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
ãããã¯ã[`Trainer`] ã® `--save_on_each_node` åŒæ°ã䜿çšããããšãã§ããäžèšã®èšå®ã¯èªåçã«è¿œå ãããŸãã
<a id='deepspeed-notebook'></a>
### Deployment in Notebooks
ããŒãããã¯ã®ã»ã«ãã¹ã¯ãªãããšããŠå®è¡ããå Žåã®åé¡ã¯ãé Œãã¹ãéåžžã®`deepspeed`ã©ã³ãã£ãŒããªãããšã§ãããã®ãããç¹å®ã®èšå®ã§ã¯ããããšãã¥ã¬ãŒãããå¿
èŠããããŸãã

GPU ã 1 ã€ã ã䜿çšããŠããå ŽåãDeepSpeed ã䜿çšããããã«ããŒãããã¯å
ã®ãã¬ãŒãã³ã° ã³ãŒãã調æŽããæ¹æ³ã¯æ¬¡ã®ãšããã§ãã
```python
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
泚: `...` ã¯ã颿°ã«æž¡ãéåžžã®åŒæ°ã衚ããŸãã
è€æ°ã® GPU ã䜿çšããå ŽåãDeepSpeed ãåäœããã«ã¯ãã«ãããã»ã¹ç°å¢ã䜿çšããå¿
èŠããããŸããã€ãŸãããã®ç®çã§ã¯ã©ã³ãã£ãŒã䜿çšããå¿
èŠãããããã®ã»ã¯ã·ã§ã³ã®åé ã§ç€ºãããããªåæ£ç°å¢ã®ãšãã¥ã¬ãŒã·ã§ã³ã§ã¯å®çŸã§ããŸããã

çŸåšã®ãã£ã¬ã¯ããªã®ããŒãããã¯ã«ãã®å Žã§æ§æãã¡ã€ã«ãäœæãããå Žåã¯ãå°çšã®ã»ã«ã«æ¬¡ã®å
容ãèšè¿°ããŠå®è¡ããŸãã
```python no-style
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
```
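äžèšã® `%%bash` ãã¢ããã¯ã®ä»£ããã«ãèšå®ã Python ã® `dict` ãšããŠçµã¿ç«ãŠã`json.dump` ã§æžãåºãããšãã§ããŸãã以äžã¯ãã®æå°éã®ã¹ã±ããã§ã (èšå®å
容ã¯äžã®äŸãç°¡ç¥åãããã®ã§ã)ã

```python
import json

# ç°¡ç¥åãã ZeRO-3 èšå®ã dict ãšããŠæ§ç¯ããJSON ãšããŠæžãåºãã¹ã±ãã
ds_config = {
    "fp16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
    "gradient_accumulation_steps": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("ds_config_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

æžãåºãããã¡ã€ã«ã¯ããã®ãŸãŸ `TrainingArguments(..., deepspeed="ds_config_zero3.json")` ã«æž¡ããŸãã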
ãã¬ãŒãã³ã° ã¹ã¯ãªãããããŒãããã¯ã®ã»ã«ã§ã¯ãªãéåžžã®ãã¡ã€ã«ã«ããå Žåã¯ãã»ã«ããã·ã§ã«çµç±ã§ `deepspeed` ãéåžžã©ããèµ·åã§ããŸããããšãã°ã`run_translation.py` ã䜿çšããã«ã¯ã次ã®ããã«èµ·åããŸãã
```python no-style
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```
ãŸãã¯ã`%%bash` ããžãã¯ã䜿çšãããšãã·ã§ã« ããã°ã©ã ãå®è¡ããããã®è€æ°è¡ã®ã³ãŒããèšè¿°ããããšãã§ããŸãã
```python no-style
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```
ãã®ãããªå Žåããã®ã»ã¯ã·ã§ã³ã®æåã«ç€ºããã³ãŒãã¯å¿
èŠãããŸããã

泚: `%%bash` ããžãã¯ã¯åªããŠããŸãããçŸæç¹ã§ã¯åºåããããã¡ãªã³ã°ãããããããã»ã¹ãå®äºãããŸã§ãã°ã¯è¡šç€ºãããŸããã
<a id='deepspeed-config'></a>
### Configuration
èšå®ãã¡ã€ã«ã§äœ¿çšã§ãã DeepSpeed èšå®ãªãã·ã§ã³ã®å®å
šãªã¬ã€ãã«ã€ããŠã¯ã[次ã®ããã¥ã¡ã³ã](https://www.deepspeed.ai/docs/config-json/) ãåç
§ããŠãã ããã

ããŸããŸãªå®éã®ããŒãºã«å¯Ÿå¿ããæ°åã® DeepSpeed æ§æäŸã¯ã[DeepSpeedExamples](https://github.com/microsoft/DeepSpeedExamples) ãªããžããªã§èŠã€ããããšãã§ããŸãã
```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
```
äžèšã®ã³ãŒããç¶ããŠãLamb ãªããã£ãã€ã¶ãŒãæ§æããããšããŠãããšããŸãããããã£ãŠã次ã®äžããæ€çŽ¢ã§ããŸã
`.json` ãã¡ã€ã«ã®äŸ:
```bash
grep -i Lamb $(find . -name '*json')
```
ããã«ããã€ãã®äŸã [ã¡ã€ã³ ãªããžããª](https://github.com/microsoft/DeepSpeed) ã«ããããŸãã
DeepSpeed ã䜿çšããå Žåã¯ãåžžã« DeepSpeed æ§æãã¡ã€ã«ãæå®ããå¿
èŠããããŸãããäžéšã®æ§æãã©ã¡ãŒã¿ã¯ã³ãã³ãã©ã€ã³çµç±ã§èšå®ããŸãã埮åŠãªéãã«ã€ããŠã¯ããã®ã¬ã€ãã®æ®ãã®éšåã§èª¬æããŸãã

DeepSpeed æ§æãã¡ã€ã«ãã©ã®ãããªãã®ããçè§£ããããã«ãZeRO ã¹ããŒãž 2 æ©èœãæå¹ã«ããæ§æãã¡ã€ã«ã次ã«ç€ºããŸããããã¯ãªããã£ãã€ã¶ãŒç¶æ
ã® CPU ãªãããŒããå«ã¿ã`AdamW`ãªããã£ãã€ã¶ãŒãš`WarmupLR`ã¹ã±ãžã¥ãŒã©ãŒã䜿çšãã`--fp16` ãæž¡ãããå Žåã«æ··å粟床ãã¬ãŒãã³ã°ãæå¹ã«ããŸã:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```
ããã°ã©ã ãå®è¡ãããšãDeepSpeed 㯠[`Trainer`] ããåãåã£ãèšå®ãã³ã³ãœãŒã«ã«ãã°ãšããŠåºåãããããæçµçã«ã©ã®ãããªèšå®ãæž¡ãããã®ããæ£ç¢ºã«ç¢ºèªã§ããŸãã
<a id='deepspeed-config-passing'></a>
### Passing Configuration
ãã®ããã¥ã¡ã³ãã§èª¬æããããã«ãéåžžãDeepSpeed èšå®ã¯ json ãã¡ã€ã«ãžã®ãã¹ãšããŠæž¡ãããŸãããã³ãã³ã ã©ã€ã³ ã€ã³ã¿ãŒãã§ã€ã¹ã䜿çšããã« [`TrainingArguments`] çµç±ã§ [`Trainer`] ãã€ã³ã¹ã¿ã³ã¹åããããã°ã©ã ã§ã¯ã`deepspeed` åŒæ°ã«ãã¹ãããã `dict` ãæž¡ãããšãã§ããŸããããã«ããããã®å Žã§æ§æãäœæã§ãã[`TrainingArguments`] ã«æž¡ãåã«ãã¡ã€ã« ã·ã¹ãã ã«æžã蟌ãå¿
èŠããããŸããã

èŠçŽãããšã次ã®ããšãã§ããŸãã
```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```
ãŸãã¯ïŒ
```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```
<a id='deepspeed-config-shared'></a>
### Shared Configuration
<Tip warning={true}>
ãã®ã»ã¯ã·ã§ã³ã¯å¿
èªã§ãã
</Tip>
[`Trainer`] ãš DeepSpeed ã®äž¡æ¹ãæ£ããæ©èœããã«ã¯ãããã€ãã®èšå®å€ãå¿
èŠã§ãããããã£ãŠãæ€åºãå°é£ãªãšã©ãŒã«ã€ãªããå¯èœæ§ã®ããå®çŸ©ã®ç«¶åãé²ãããã«ãããããå®çŸ©ã¯ [`Trainer`] ã³ãã³ãã©ã€ã³åŒæ°çµç±ã§æ§æããããšã«ããŸããã

ããã«ãäžéšã®æ§æå€ã¯ã¢ãã«ã®æ§æã«åºã¥ããŠèªåçã«å°åºããããããè€æ°ã®å€ãæåã§èª¿æŽããããšãå¿ããªãããã«ããããã倧éšå㯠[`Trainer`] ã«ä»»ããã®ãæåã§ãã

ãããã£ãŠããã®ã¬ã€ãã®æ®ãã®éšåã§ã¯ãç¹å¥ãªèšå®å€ `auto` ã衚瀺ãããŸãããããèšå®ãããšãæ£ããå€ãŸãã¯æãå¹ççãªå€ã«èªåçã«çœ®ãæããããŸãããããç¡èŠããŠå€ãæç€ºçã«èšå®ããããšãéžæããããšãã§ããŸããããã®å Žåã¯ã[`Trainer`] åŒæ°ãš DeepSpeed èšå®ãäžèŽããŠããããšã«ååæ³šæããŠãã ãããããšãã°ãåãåŠç¿çãããããµã€ãºãåŸé
环ç©èšå®ã䜿çšããŠããŸãã?ããããäžèŽããªãå Žåããã¬ãŒãã³ã°ã¯æ€åºãéåžžã«é£ããæ¹æ³ã§å€±æããå¯èœæ§ããããããªãã¯èŠåãåããŸããã

ããã«ãDeepSpeed ã®ã¿ã«åºæã®å€ãè€æ°ããããããã¯ããªãã®èŠæã«åãããŠæåã§èšå®ããå¿
èŠããããŸãã

ç¬èªã®ããã°ã©ã ã§ãDeepSpeed æ§æããã¹ã¿ãŒãšããŠå€æŽããããã«åºã¥ã㊠[`TrainingArguments`] ãèšå®ãããå Žåã¯ã次ã®ã¢ãããŒãã䜿çšããããšãã§ããŸããæé ã¯æ¬¡ã®ãšããã§ãã

1. ãã¹ã¿ãŒæ§æãšããŠäœ¿çšãã DeepSpeed æ§æãäœæãŸãã¯ããŒãããŸã
2. ãããã®å€ã«åºã¥ã㊠[`TrainingArguments`] ãªããžã§ã¯ããäœæããŸã

`scheduler.params.total_num_steps` ãªã©ã®äžéšã®å€ã¯ã`train` äžã« [`Trainer`] ã«ãã£ãŠèšç®ãããããšã«æ³šæããŠãã ããããã¡ãããèªåã§èšç®ããããšãã§ããŸãã
<a id='deepspeed-zero'></a>
### ZeRO
[Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) ã¯ãDeepSpeed ã®äž»å補åã§ãã 3 ã€ã®ç°ãªãã¬ãã« (段é) ã®æé©åããµããŒãããŸããæåã®ã¹ããŒãžã¯ãã¹ã±ãŒã©ããªãã£ã®èгç¹ããã¯ããŸãè峿·±ããã®ã§ã¯ãªãããããã£ãŠããã®ããã¥ã¡ã³ãã§ã¯ã¹ããŒãž 2 ãš 3 ã«çŠç¹ãåœãŠãŸããã¹ããŒãž 3 ã¯ãææ°ã® ZeRO-Infinity ã®è¿œå ã«ãã£ãŠããã«æ¹åãããŠããŸãã詳现ã«ã€ããŠã¯ãDeepSpeed ã®ããã¥ã¡ã³ããåç
§ããŠãã ããã

æ§æãã¡ã€ã«ã® `zero_optimization` ã»ã¯ã·ã§ã³ã¯æãéèŠãªéšåã§ã ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training))ãããã§ãã©ã® ZeRO ã¹ããŒãžãæå¹ã«ãããããããŠããããã©ã®ããã«æ§æãããããå®çŸ©ããŸããåãã©ã¡ãŒã¿ã®èª¬æã¯ DeepSpeed ã®ããã¥ã¡ã³ãã«ãããŸãã

ãã®ã»ã¯ã·ã§ã³ã¯ãDeepSpeed èšå®ãä»ããŠã®ã¿èšå®ããå¿
èŠããããŸã - [`Trainer`] ã¯åçã®ã³ãã³ãã©ã€ã³åŒæ°ãæäŸããŸããã

泚: çŸåšãDeepSpeed ã¯ãã©ã¡ãŒã¿ãŒåãæ€èšŒããªããããã¹ãã«ãééãããšãã¹ãã«ãééã£ããã©ã¡ãŒã¿ã¯ç¡èŠãããããã©ã«ãèšå®ã䜿çšãããŸãã DeepSpeed ãšã³ãžã³ã®èµ·åãã° ã¡ãã»ãŒãžãèŠãŠãå®éã«äœ¿çšãããå€ã確èªã§ããŸãã
<a id='deepspeed-zero2-config'></a>
#### ZeRO-2 Config
以äžã¯ãZeRO ã¹ããŒãž 2 ã®æ§æäŸã§ãã
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
}
}
```
**æ§èœèª¿æŽïŒ**
- `offload_optimizer` ãæå¹ã«ãããšãGPU RAM ã®äœ¿çšéãåæžãããŸã (`"stage": 2` ãå¿
èŠã§ã)
- `"overlap_comm": true` ã¯ãGPU RAM 䜿çšéã®å¢å ãšåŒãæãã«ãé
å»¶ããã¹ãŠåæžããŸãã `overlap_comm` 㯠`allgather_bucket_size` ãš `reduce_bucket_size` ã®å€ã® 4.5 åã®ã¡ã¢ãªã䜿çšããŸãããããã£ãŠããããã 5e8 ã«èšå®ãããŠããå Žåã9GB ã®ãããããªã³ããå¿
èŠã«ãªããŸã (`5e8 x 2Bytes x 2 x 4.5`)ã 8GB 以äžã® RAM ãæèŒãã GPU ã§ OOM ãšã©ãŒãçºçããå Žåã¯ããããã®ãã©ã¡ãŒã¿ã `2e8` çšåºŠã«æžãããš 3.6GB ã§æžã¿ãŸãããã倧容éã® GPU ã§ OOM ã«éãå§ããå Žåãåæ§ã§ãã
- ãããã®ãããã¡ãæžãããšãããå€ãã® GPU RAM ãå©çšããããã«éä¿¡é床ãç ç²ã«ããããšã«ãªããŸãããããã¡ ãµã€ãºãå°ããã»ã©éä¿¡ã¯é
ããªããä»ã®ã¿ã¹ã¯ã§äœ¿çšã§ãã GPU RAM ã¯å¢ããŸãããããã£ãŠãããããµã€ãºã倧ããããããšãéèŠãªå Žåã¯ããã¬ãŒãã³ã°æéãå°ãé
ããªãããšã¯è¯ããã¬ãŒããªãªãã«ãªãå¯èœæ§ããããŸãã
ããã«ã`deepspeed==0.4.4`ã«ã¯ã次ã®ã³ãã³ãã§æå¹ã«ã§ããæ°ãããªãã·ã§ã³`round_robin_gradients`ã远å ãããŸããã
```json
{
"zero_optimization": {
"round_robin_gradients": true
}
}
```
ããã¯ã现ããåŸé
ããŒãã£ã·ã§ãã³ã°ã«ãã£ãŠã©ã³ã¯éã® CPU ã¡ã¢ãªãžã®åŸé
ã³ããŒã䞊ååãããCPU ãªãããŒãçšã®ã¹ããŒãž 2 æé©åã§ããããã©ãŒãã³ã¹äžã®å©ç¹ã¯ãåŸé
环ç©ã¹ããã (ãªããã£ãã€ã¶ãŒ ã¹ãããéã®ã³ããŒã®å¢å ) ãŸã㯠GPU æ° (䞊ååŠçã®å¢å ) ã«å¿ããŠå¢å ããŸãã
<a id='deepspeed-zero3-config'></a>
#### ZeRO-3 Config
以äžã¯ãZeRO ã¹ããŒãž 3 ã®æ§æäŸã§ãã
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
ã¢ãã«ãŸãã¯ã¢ã¯ãã£ããŒã·ã§ã³ã GPU ã¡ã¢ãªã«åãŸãããCPU ãæªäœ¿çšã§ããã®ã« OOM ãçºçããŠããå Žåã¯ã`"device": "cpu"` ã䜿çšããŠãªããã£ãã€ã¶ã®ç¶æ
ãšãã©ã¡ãŒã¿ã CPU ã¡ã¢ãªã«ãªãããŒãããããšã§ããã®å¶éã解決ã§ããå¯èœæ§ããããŸãã CPU ã¡ã¢ãªã«ãªãããŒãããããªãå Žåã¯ã`device`ãšã³ããªã«`cpu`ã®ä»£ããã«`none`ã䜿çšããŸãã NVMe ãžã®ãªãããŒãã«ã€ããŠã¯åŸã»ã©èª¬æããŸãã

åºå®ã¡ã¢ãªã¯ã`pin_memory`ã`true`ã«èšå®ãããšæå¹ã«ãªããŸãããã®æ©èœã«ãããã¹ã«ãŒããããåäžããŸãããä»ã®ããã»ã¹ã䜿çšã§ããã¡ã¢ãªãå°ãªããªããšããã³ã¹ããããããŸãããã³çããããã¡ã¢ãªã¯ããããèŠæ±ããç¹å®ã®ããã»ã¹ã®ããã«ç¢ºä¿ãããéåžžãéåžžã® CPU ã¡ã¢ãªãããã¯ããã«é«éã«ã¢ã¯ã»ã¹ãããŸãã
**æ§èœèª¿æŽïŒ**
- `stage3_max_live_parameters`: `1e9`
- `stage3_max_reuse_distance`: `1e9`
OOM ã«éããå Žåã¯ã`stage3_max_live_parameters` ãš `stage3_max_reuse_distance` ãæžãããŸããã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ããå®è¡ããªãéãã圱é¿ã¯æå°éã«æããããããã©ãŒãã³ã¹ãžã®åœ±é¿ã¯ã»ãšãã©ãããŸããã `1e9` ã¯çŽ 2GB ãæ¶è²»ããŸãã `stage3_max_live_parameters` ãš `stage3_max_reuse_distance` ã¯ã¡ã¢ãªãå
±æããŠãããããå ç®ããããã®ã§ã¯ãªããåèšã§ 2GB ã«ãªããŸãã

`stage3_max_live_parameters` ã¯ãç¹å®ã®æç¹ã§ GPU äžã«ä¿æããå®å
šãªãã©ã¡ãŒã¿ã®æ°ã®äžéã§ãããåå©çšè·é¢ (reuse distance)ãã¯ããã©ã¡ãŒã¿ãå°æ¥ãã€åã³äœ¿çšãããããå€æããããã®ææšã§ã`stage3_max_reuse_distance` ã䜿çšããŠãã©ã¡ãŒã¿ãç Žæ£ãããä¿æããããæ±ºå®ããŸãããã©ã¡ãŒã¿ãè¿ãå°æ¥ã«åã³äœ¿çšãããäºå® (`stage3_max_reuse_distance` æªæº) ã§ããã°ãéä¿¡ã®ãªãŒããŒããããæžããããã«ä¿æããŸããããã¯ãã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ããæå¹ã«ããŠããå Žåã«éåžžã«åœ¹ç«ã¡ãŸãããã©ã¯ãŒãåèšç®ã backward ãåäžã¬ã€ã€ãŒç²åºŠã§è¡ãããåŸæ¹åèšç®ãŸã§ãã©ã¡ãŒã¿ãåæ¹åèšç®ã«ä¿æãããããã§ãã

æ¬¡ã®æ§æå€ã¯ãã¢ãã«ã®é衚瀺 (hidden) ãµã€ãºã«ãã£ãŠç°ãªããŸãã

- `reduce_bucket_size`: `hidden_size*hidden_size`
- `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
- `stage3_param_persistence_threshold`: `10 * hidden_size`

ãããã£ãŠããããã®å€ã `auto` ã«èšå®ãããšã[`Trainer`] ãæšå¥šãããå€ãèªåçã«å²ãåœãŠãŸãããã¡ãããããããæç€ºçã«èšå®ããããšãã§ããŸãã
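ããšãã° `hidden_size=1024` ã®ã¢ãã«ã§ã¯ãäžèšã®ãã¥ãŒãªã¹ãã£ãã¯ã¯æ¬¡ã®ããã«ç®åºã§ããŸã (颿°åã¯èª¬æçšã®ä»®ã®ãã®ã§ã)ã

```python
# 説æçšã®ã¹ã±ãã: hidden_size ãããäžèšã® `auto` ãã¥ãŒãªã¹ãã£ãã¯ã§
# å²ãåœãŠãããå€ãåçŸããŸãã
def zero3_auto_values(hidden_size):
    return {
        "reduce_bucket_size": hidden_size * hidden_size,
        "stage3_prefetch_bucket_size": 0.9 * hidden_size * hidden_size,
        "stage3_param_persistence_threshold": 10 * hidden_size,
    }

values = zero3_auto_values(1024)
print(values["reduce_bucket_size"])                  # 1048576
print(values["stage3_param_persistence_threshold"])  # 10240
```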
`stage3_gather_16bit_weights_on_model_save` ã¯ãã¢ãã«ã®ä¿åæã« fp16 éã¿ã®çµ±åãæå¹ã«ããŸãã倧ããªã¢ãã«ãšè€æ°ã® GPU ã®å Žåãããã¯ã¡ã¢ãªãšé床ã®äž¡æ¹ã®ç¹ã§é«äŸ¡ãªæäœã§ããããã¬ãŒãã³ã°ãåéããäºå®ãããå Žåã¯çŸåšå¿
é ã§ãããã®å¶éãåãé€ãããã䟿å©ã«ããä»åŸã®ã¢ããããŒãã«æ³šç®ããŠãã ããã

ZeRO-2 æ§æããç§»è¡ããŠããå Žåã`allgather_partitions`ã`allgather_bucket_size`ãããã³ `reduce_scatter` èšå®ãã©ã¡ãŒã¿ã¯ ZeRO-3 ã§ã¯äœ¿çšãããŸãããèšå®ãã¡ã€ã«ã«æ®ããŠããŠãç¡èŠãããŸãã

- `sub_group_size`: `1e9`

`sub_group_size` ã¯ããªããã£ãã€ã¶ãŒã®ã¹ãããäžã«ãã©ã¡ãŒã¿ãŒãæŽæ°ãããç²åºŠãå¶åŸ¡ããŸãããã©ã¡ãŒã¿ã¯ `sub_group_size` ã®ãã±ããã«ã°ã«ãŒãåãããåãã±ããã¯äžåºŠã« 1 ã€ãã€æŽæ°ãããŸãã ZeRO-Infinity ã® NVMe ãªãããŒãã§äœ¿çšããå Žåã`sub_group_size` ã¯ããªããã£ãã€ã¶ã¹ãããäžã«ã¢ãã«ã®ç¶æ
ã CPU ã¡ã¢ãªãš NVMe ã®éã§åºå
¥ãããç²åºŠãå¶åŸ¡ããéåžžã«å€§èŠæš¡ãªã¢ãã«ã® CPU ã¡ã¢ãªäžè¶³ã鲿¢ããŸãã

NVMe ãªãããŒãã䜿çšããªãå Žåã¯ã`sub_group_size` ãããã©ã«ãå€ã® *1e9* ã®ãŸãŸã«ã§ããŸããæ¬¡ã®å Žåã¯ããã©ã«ãå€ã®å€æŽãæ€èšããŠãã ããã

1. ãªããã£ãã€ã¶ãŒ ã¹ãããäžã« OOM ãçºçãã: `sub_group_size` ãæžãããŠãäžæãããã¡ã®ã¡ã¢ãªäœ¿çšéãåæžããŸãã
2. ãªããã£ãã€ã¶ãŒ ã¹ãããã«æéãããã: `sub_group_size` ãå¢ãããŠãããŒã¿ãããã¡ã®å¢å ã«ãã垯åå¹
ã®äœ¿çšçãåäžãããŸãã
#### ZeRO-0 Config
ã¹ããŒãž 0 ãš 1 ã¯ãã£ãã«äœ¿çšãããªããããæåŸã«ãªã¹ãããŠããããšã«æ³šæããŠãã ããã
ã¹ããŒãž 0 ã§ã¯ããã¹ãŠã®ã¿ã€ãã®ã·ã£ãŒãã£ã³ã°ãç¡å¹ã«ããDDP ãšã㊠DeepSpeed ã®ã¿ã䜿çšããŸããæ¬¡ã®ã³ãã³ãã§ãªã³ã«ã§ããŸãã
```json
{
"zero_optimization": {
"stage": 0
}
}
```
ããã«ãããä»ã«äœã倿Žããå¿
èŠããªããåºæ¬çã« ZeRO ãç¡å¹ã«ãªããŸãã
#### ZeRO-1 Config
ã¹ããŒãž 1 ã¯ãã¹ããŒãž 2 ããåŸé
ã·ã£ãŒãã£ã³ã°ãé€ãããã®ã§ãããªããã£ãã€ã¶ãŒã®ç¶æ
ãã·ã£ãŒãåããã ãã§ãåŠçãå°ãé«éåããããã«ãã€ã§ã詊ãããšãã§ããŸãã
```json
{
"zero_optimization": {
"stage": 1
}
}
```
<a id='deepspeed-nvme'></a>
### NVMe Support
ZeRO-Infinity ã¯ãGPU ãš CPU ã¡ã¢ãªã NVMe ã¡ã¢ãªã§æ¡åŒµããããšã§ãéåžžã«å€§èŠæš¡ãªã¢ãã«ã®ãã¬ãŒãã³ã°ãå¯èœã«ããŸããã¹ããŒã ããŒãã£ã·ã§ãã³ã°ããã³ã¿ã€ãªã³ã° ã¢ã«ãŽãªãºã ã«ãããå GPU ãéåä¿¡ããããŒã¿ã¯éåžžã«å°éã§æžã¿ããªãããŒãã«ãããææ°ã® NVMe ã¯ããã¬ãŒãã³ã° ããã»ã¹ã«å©çšã§ããåèšã¡ã¢ãª ããŒã«ãããã«å€§ããããã®ã«é©ããŠããããšã倿ããŸããã ZeRO-Infinity ã«ã¯ãZeRO-3 ãæå¹ã«ãªã£ãŠããå¿
èŠããããŸãã

次ã®èšå®äŸã§ã¯ãNVMe ã«ãªããã£ãã€ã¶ã®ç¶æ
ãšãã©ã¡ãŒã¿ã®äž¡æ¹ããªãããŒãã§ããããã«ããŸãã
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": false,
"overlap_events": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
  }
}
```
You can choose to offload both optimizer states and parameters to NVMe, just one of them, or neither. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only, as it'd be faster (hint: *"device": "cpu"*).

Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).

Make sure that your `nvme_path` is actually an NVMe drive, since it will work with a normal hard drive or SSD, but it'll be much, much slower. The fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can get ~3.5 GB/s read and ~3 GB/s write peak speeds).

In order to figure out the optimal `aio` configuration block, you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998).
<a id='deepspeed-zero2-zero3-performance'></a>
#### ZeRO-2 vs ZeRO-3 Performance
ZeRO-3 is likely to be slower than ZeRO-2 when everything else is configured the same, because the former has to gather the model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs, you may choose to stick with it. It's important to understand that ZeRO-3 enables a much higher scalability capacity at the cost of speed.

It's possible to adjust the ZeRO-3 configuration to make it perform closer to ZeRO-2:

- Set `stage3_param_persistence_threshold` to a very large number, larger than the largest parameter, for example `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs.
- Turn off `offload_params`, since ZeRO-2 doesn't have that option.

The performance is likely to improve significantly just by turning off `offload_params`, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you will be able to train. These allow you to trade scalability for speed, depending on your needs.
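As a rough illustration of the threshold suggested above (the `hidden_size` value here is hypothetical; substitute your model's actual dimension):

```python
# Pick a stage3_param_persistence_threshold large enough that no parameter
# is ever partitioned, approximating ZeRO-2's keep-params-on-GPU behavior.
# The largest dense weight in a typical transformer block is the 4h x h MLP
# matrix, so 6*h*h comfortably exceeds every parameter.
hidden_size = 1024  # hypothetical model dimension

threshold = 6 * hidden_size * hidden_size
print(threshold)  # 6291456
```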
<a id='deepspeed-zero2-example'></a>
#### ZeRO-2 Example
Here is a full ZeRO-2 auto-configuration file `ds_config_zero2.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Here is a full ZeRO-2 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
<a id='deepspeed-zero3-example'></a>
#### ZeRO-3 Example
Here is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Here is a full ZeRO-3 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
#### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance
So now you know there are all these different stages. How do you decide which of them to use? This section attempts to answer this question.

In general the following applies:

- Speed-wise (left is faster than right):

Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads

- GPU-memory-usage-wise (right is more GPU memory efficient than left):

Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads

So when you want to get the fastest execution while fitting into the minimal number of GPUs, you can follow this process: start from the fastest approach, and if you run into a GPU OOM, move on to the next slower approach, which uses less GPU memory, and so on.

First set the batch size to 1 (you can always use gradient accumulation for any desired effective batch size):
1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then
2. Try ZeRO stage 2 first. if OOM then
3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then
4. Switch to ZeRO stage 3 - if OOM then
5. Enable `offload_param` to `cpu` - if OOM then
6. Enable `offload_optimizer` to `cpu` - if OOM then
7. If you still can't fit a batch size of 1, first check the various default values and lower them if you can. For example, if you use `generate` and you don't use a wide search beam, make it narrower as it'd take a lot of memory.
8. Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs, and fp16 on older GPU architectures.
9. If you still OOM you could add more hardware or enable ZeRO-Infinity - that is, switch the offloads `offload_param` and `offload_optimizer` to `nvme`. You need to make sure it's a very fast NVMe. As an anecdote, we were once able to infer BLOOM-176B on a tiny GPU using ZeRO-Infinity, except it was extremely slow. But it worked!
You can, of course, work through these steps in reverse, starting from the most GPU-memory-efficient config and then going backwards, or try bisecting.

Once you have your batch size of 1 not leading to OOM, measure your effective throughput.

Next try to increase the batch size to as large as you can, since the bigger the batch size the more efficient the GPUs are, as they perform the best when the matrices they multiply are huge.

Now the performance optimization game starts. You can turn off some of the offload features, or step down in ZeRO stages, and increase/decrease the batch size, and again measure your effective throughput. Rinse and repeat until satisfied.

Don't feel bad about spending a long time on this, but if you're about to start a 3-month-long training, do spend a few days on finding the most effective throughput settings. Your training cost will be the lowest and you will finish the training sooner. In the current crazily-paced ML world, if it takes you an extra month to train something, you are likely to miss a golden opportunity. Of course, this is only me sharing an observation, and in no way am I trying to rush you. Before beginning to train BLOOM-176B I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! These efforts saved us more than one month of training time.
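One common way to put a number on the effective throughput mentioned above is to convert step time into model TFLOPs per GPU. A minimal sketch, assuming the usual estimate of ~6 FLOPs per parameter per token for forward+backward, or ~8 when activation checkpointing recomputes the forward pass (all numbers below are illustrative):

```python
def tflops_per_gpu(num_params, tokens_per_step, step_time_s, num_gpus, flops_factor=8):
    """Rough training throughput: ~flops_factor FLOPs per parameter per token
    (6 for fwd+bwd, 8 when activations are recomputed by checkpointing)."""
    total_flops = flops_factor * num_params * tokens_per_step
    return total_flops / step_time_s / num_gpus / 1e12

# hypothetical run: 3B params, batch of 512 sequences x 2048 tokens,
# 10 s per step, 64 GPUs
print(round(tflops_per_gpu(3e9, 512 * 2048, 10, 64), 1))  # 39.3
```

Tracking this number while toggling ZeRO stages, offloads, and batch size tells you which combination actually wins on your hardware.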
These notes were written primarily for the training mode, but they should mostly apply for inference as well. For example, gradient checkpointing is only useful during training, as it does nothing during inference. Additionally, if you are doing multi-GPU inference and are not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should deliver superior performance.

Other quick performance-related notes:

- if you are training something from scratch, always try to have tensors with shapes that are divisible by 16 (e.g. hidden size). For batch size try divisible by 2 at least. There are [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility recommendations that are hardware-specific, if you want to squeeze even higher performance out of your GPUs.
### Activation Checkpointing or Gradient Checkpointing

Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. Very confusing, but this is how it is.

Gradient checkpointing allows you to trade speed for GPU memory, which either lets you overcome a GPU OOM, or increase your batch size, which often leads to better performance.

HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.

Therefore you have two ways to take advantage of this very beneficial feature:

1. If you want to use an HF Transformers model you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` in the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there.
2. If you write your own model and you want to use DeepSpeed's activation checkpointing, you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with the DeepSpeed API. The latter is more flexible, since it allows you to offload the forward activations to the CPU memory instead of recalculating them.
### Optimizer and Scheduler

As long as you don't enable `offload_optimizer`, you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of HuggingFace scheduler and DeepSpeed optimizer:
| Combos | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |
If `offload_optimizer` is enabled, a non-DeepSpeed optimizer can be used as long as it has both CPU and GPU implementations (except for LAMB).
<a id='deepspeed-optimizer'></a>
#### Optimizer

DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended to be used. It can, however, import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters).

If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values or the defaults for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`.

Here is an example of the auto-configured `optimizer` entry for `AdamW`:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
}
}
```
Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values, and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. The command line rules. The values that get overridden are:

- `lr` with the value of `--learning_rate`
- `betas` with the value of `--adam_beta1 --adam_beta2`
- `eps` with the value of `--adam_epsilon`
- `weight_decay` with the value of `--weight_decay`

Therefore please remember to tune the shared hyperparameters on the command line.

You can also set the values explicitly:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

If you want to use another optimizer which is not listed above, you will have to add it to the top level configuration:
```json
{
"zero_allow_untested_optimizer": true
}
```
Similarly to `AdamW`, you can configure any other officially supported optimizer. Just remember that those may have different config values, e.g. for Adam you will want `weight_decay` to be around `0.01`.

Additionally, offload works best when it's used with DeepSpeed's CPU Adam optimizer. If you want to use a different optimizer with offload, since `deepspeed==0.8.3` you also need to add:
```json
{
"zero_force_ds_cpu_optimizer": false
}
```
to the top level configuration.
<a id='deepspeed-scheduler'></a>
#### Scheduler
DeepSpeed supports the `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters).

Here is where the schedulers overlap between ð€ Transformers and DeepSpeed:

- `WarmupLR` via `--lr_scheduler_type constant_with_warmup`
- `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value of `--lr_scheduler_type`, therefore, if you don't configure the scheduler, this is the one that will get configured by default.

If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure a ð€ Transformers version of it.

Here is an example of the auto-configured `scheduler` entry for `WarmupLR`:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
*"auto"* ã䜿çšãããŠããããã[`Trainer`] åŒæ°ã¯èšå®ã«æ£ããå€ãèšå®ããŸãã
ãã¡ã€ã«ãããã¯ãå€ã®æ±ºå®çãªãœãŒã¹ã 1 ã€ããããšãšãããšãã°æ¬¡ã®ãããªå Žåã«èŠã€ãã«ãããšã©ãŒãé¿ããããã§ãã
åŠç¿çã¯ãå Žæããšã«ç°ãªãå€ã«èšå®ãããŸããã³ãã³ãã©ã€ã³ã®ã«ãŒã«ãèšå®ãããå€ã¯æ¬¡ã®ãšããã§ãã
- `warmup_min_lr` ã®å€ã¯ `0` ã§ãã
- `warmup_max_lr` ãš `--learning_rate` ã®å€ã
- `warmup_num_steps` ãš `--warmup_steps` ã®å€ (æå®ãããŠããå Žå)ããã以å€ã®å Žå㯠`--warmup_ratio` ã䜿çšããŸã
ãã¬ãŒãã³ã° ã¹ãããã®æ°ãä¹ç®ããåãäžããŸãã
- `total_num_steps` ã«ã¯ `--max_steps` ã®å€ãæå®ããããæå®ãããŠããªãå Žåã¯å®è¡æã«èªåçã«å°åºãããŸãã
ç°å¢ãããŒã¿ã»ããã®ãµã€ãºãããã³ãã®ä»ã®ã³ãã³ã ã©ã€ã³åŒæ° (
`WarmupDecayLR`)ã
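A minimal sketch of the warmup-step derivation described above (the rounding matches how `--warmup_ratio` is typically applied; the function name is illustrative):

```python
import math

def warmup_num_steps(num_training_steps, warmup_steps=0, warmup_ratio=0.0):
    """An explicit --warmup_steps wins; otherwise --warmup_ratio is
    multiplied by the number of training steps and rounded up."""
    if warmup_steps > 0:
        return warmup_steps
    return math.ceil(num_training_steps * warmup_ratio)

print(warmup_num_steps(10000, warmup_steps=500))   # 500
print(warmup_num_steps(10000, warmup_ratio=0.06))  # 600
```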
You can, of course, take over any or all of the configuration values and set them yourself:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

For example, for `WarmupDecayLR`, you can use the following entry:
```json
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
and `total_num_steps`, `warmup_max_lr` and `warmup_num_steps` will be set at loading time.
<a id='deepspeed-fp32'></a>
### fp32 Precision
DeepSpeed supports the full fp32 and the fp16 mixed precision.

Because of the much reduced memory needs and faster speed one gets with the fp16 mixed precision, the only time you will want to not use it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in fp16 mixed precision (e.g. this often happens with bf16-pretrained models). Such models may overflow or underflow, leading to a `NaN` loss. If this is your case then you will want to use the full fp32 mode, while disabling the default fp16 mixed precision mode explicitly like so:
```json
{
"fp16": {
"enabled": false,
}
}
```
Ampere ã¢ãŒããã¯ã㣠ããŒã¹ã® GPU ã䜿çšããŠããå Žåãpytorch ããŒãžã§ã³ 1.7 以éã¯èªåçã« ã䜿çšããããã«åãæ¿ãããŸãã
äžéšã®æäœã§ã¯ã¯ããã«å¹çç㪠tf32 圢åŒã䜿çšããŸãããçµæã¯äŸç¶ãšã㊠fp32 ã«ãªããŸãã詳现ãš
ãã³ãããŒã¯ã«ã€ããŠã¯ã[Ampere ããã€ã¹äžã® TensorFloat-32(TF32)](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) ãåç
§ããŠãã ãããææžã«ã¯ä»¥äžãå«ãŸããŸã
äœããã®çç±ã§ãã®èªå倿ã䜿çšããããªãå Žåã¯ããã®èªå倿ãç¡å¹ã«ããæ¹æ³ã«ã€ããŠèª¬æããŸãã
ð€ ãã¬ãŒããŒã§ã¯ã`--tf32` ã䜿çšããŠæå¹ã«ãããã`--tf32 0` ãŸã㯠`--no_tf32` ã䜿çšããŠç¡å¹ã«ããããšãã§ããŸããããã©ã«ãã§ã¯ãPyTorch ã®ããã©ã«ãã䜿çšãããŸãã
<a id='deepspeed-amp'></a>
### Automatic Mixed Precision
pytorch ã®ãã㪠AMP ã®æ¹æ³ãŸã㯠apex ã®ãããªæ¹æ³ã§èªåæ··å粟床ã䜿çšã§ããŸãã
### fp16
fp16 (float16) ãèšå®ã㊠pytorch AMP ã®ãããªã¢ãŒããèšå®ããã«ã¯:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of the config values are up to you.

This mode gets enabled when the `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options).
### BF16
If bf16 (bfloat16) is desired instead of fp16, the following configuration section is to be used:
```json
{
"bf16": {
"enabled": "auto"
}
}
```
bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling.

This mode gets enabled when the `--bf16` or `--bf16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:
```json
{
"bf16": {
"enabled": true
}
}
```
<Tip>
As of `deepspeed==0.6.0` the bf16 support is new and experimental.

If you use [gradient accumulation](#gradient-accumulation) with bf16 enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation.

Work is being done to fix that and to provide an option to use a higher precision `dtype` (fp16 or fp32) instead.
</Tip>
### NCCL Collectives
There is the `dtype` of the training regime, and there is a separate `dtype` that is used for communication collectives like various reduction and gathering/scattering operations.

All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using the bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation.

Various reduce operations can be quite lossy, for example when gradients are averaged across multiple GPUs. If the communications are done in fp16 or bf16 the outcome is likely to be lossy, since when one averages multiple numbers in low precision the result isn't exact. More so with bf16, as it has a lower precision than fp16. Often fp16 is good enough, as the loss is minimal when averaging grads, which are typically very small. Therefore, by default for half precision training, fp16 is used as the default for reduction operations. But you have full control over this functionality, and if you choose you can add a small overhead and ensure that reductions will be using fp32 as the accumulation dtype, and only when the result is ready will it get downcast to the half-precision `dtype` you're training in.

In order to override the default you simply add a new config entry:
```json
{
"communication_data_type": "fp32"
}
```
The valid values as of this writing are "fp16", "bfp16" and "fp32".

Note: stage ZeRO-3 had a bug with regards to the bf16 communication dtype that was fixed in `deepspeed==0.8.1`.
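The precision-loss argument above can be simulated in plain Python by quantizing the running sum to a reduced-width significand after every add. This is a rough model of fp16-style round-off, not an exact IEEE emulation (exponent limits are ignored):

```python
import math

def quantize(x, sig_bits):
    """Round x to sig_bits significand bits - a crude stand-in for a
    low-precision float format."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** sig_bits
    return math.ldexp(round(m * scale) / scale, e)

def accumulate(values, sig_bits):
    """Sum values, rounding the running total after every addition,
    like a reduction performed entirely in a low-precision dtype."""
    total = 0.0
    for v in values:
        total = quantize(total + quantize(v, sig_bits), sig_bits)
    return total

grads = [1e-4] * 10_000           # tiny gradient contributions; exact sum is 1.0
low = accumulate(grads, 11)       # fp16-like significand: the sum stalls early
high = accumulate(grads, 24)      # fp32-like significand: stays close to 1.0
print(low, high)
```

With the fp16-like width the running total stalls as soon as each increment falls below half a unit in the last place, which is exactly the failure mode that using fp32 as the reduction dtype avoids.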
### apex

To configure apex AMP-like mode set:
```json
"amp": {
"enabled": "auto",
"opt_level": "auto"
}
```
and the [`Trainer`] will automatically configure it based on the values of `args.fp16_backend` and `args.fp16_opt_level`.

This mode gets enabled when the `--fp16 --fp16_backend apex --fp16_opt_level 01` command line args are passed.

You can also configure this mode explicitly:
```json
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options).
<a id='deepspeed-bs'></a>
### Batch Size
To configure batch size, use:
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
and the [`Trainer`] will automatically set `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`.

You can also set the values explicitly:
```json
{
"train_batch_size": 12,
"train_micro_batch_size_per_gpu": 4
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
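For reference, the relationship between the two batch-size entries can be sketched as follows (the function name is illustrative; the argument names mirror the `Trainer` flags):

```python
def deepspeed_batch_sizes(world_size, per_device_train_batch_size, gradient_accumulation_steps):
    """Mirror how the Trainer fills in the two DeepSpeed batch-size entries."""
    train_micro_batch_size_per_gpu = per_device_train_batch_size
    train_batch_size = world_size * per_device_train_batch_size * gradient_accumulation_steps
    return train_micro_batch_size_per_gpu, train_batch_size

# e.g. 8 GPUs, micro-batch 4, accumulating over 2 steps -> effective batch of 64
micro, total = deepspeed_batch_sizes(8, 4, 2)
print(micro, total)  # 4 64
```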
<a id='deepspeed-grad-acc'></a>
### Gradient Accumulation
To configure gradient accumulation set:
```json
{
"gradient_accumulation_steps": "auto"
}
```
and the [`Trainer`] will automatically set it to the value of `args.gradient_accumulation_steps`.

You can also set the value explicitly:
```json
{
"gradient_accumulation_steps": 3
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
<a id='deepspeed-grad-clip'></a>
### Gradient Clipping
To configure gradient clipping set:
```json
{
"gradient_clipping": "auto"
}
```
and the [`Trainer`] will automatically set it to the value of `args.max_grad_norm`.

You can also set the value explicitly:
```json
{
"gradient_clipping": 1.0
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
<a id='deepspeed-weight-extraction'></a>
### Getting The Model Weights Out
As long as you continue training and resuming using DeepSpeed you don't need to worry about anything. DeepSpeed stores the fp32 master weights in its custom checkpoint optimizer files, which are `global_step*/*optim_states.pt` (this is a glob pattern), and are saved under the normal checkpoint.
**FP16 Weights:**

When a model is saved under ZeRO-2, you end up having the normal `pytorch_model.bin` file with the model weights, but they are only the fp16 version of the weights.

Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs. Therefore, `"stage3_gather_16bit_weights_on_model_save": true` is required to get the `Trainer` to save the fp16 version of the weights. If this setting is `False`, `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict` it wouldn't be possible to load it back.
```json
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
**FP32 Weights:**

While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to the [models hub](https://huggingface.co/models) or pass it to someone else, you most likely will want to get the fp32 weights. This ideally shouldn't be done during training, since it is a process that requires a lot of memory, and therefore is best performed offline after the training is complete. But, if desired and you have plenty of free CPU memory, it can be done in the same training script. The following sections discuss both approaches.
**Live FP32 Weights Recovery:**

This approach may not work if your model is large and you have little free CPU memory left at the end of the training.

If you have saved at least one checkpoint, and you want to use the latest one, you can do the following:
```python
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
If you're using the `--load_best_model_at_end` class:*~transformers.TrainingArguments* argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then doing the same as above:
```python
import os

from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
<Tip>
Note that once `load_state_dict_from_zero_checkpoint` was run, the `model` will no longer be usable in the DeepSpeed context of the same application, i.e. you will need to re-initialize the deepspeed engine, since `model.load_state_dict(state_dict)` will remove all the DeepSpeed magic from it. So do this only at the very end of the training.
</Tip>
Of course, you don't have to use class:*~transformers.Trainer*; you can adjust the examples above to your own trainer.

If for some reason you want more refinement, you can also extract the fp32 `state_dict` of the weights and apply it yourself, as is shown in the following example:
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```
**Offline FP32 Weights Recovery:**

DeepSpeed creates a special conversion script `zero_to_fp32.py`, which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need the configuration file or a `Trainer` to do the extraction.

Let's say your checkpoint folder looks like this:
```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```
In this example there is just one DeepSpeed checkpoint sub-folder *global_step1*. Therefore, to reconstruct the fp32 weights just run:
```bash
python zero_to_fp32.py . pytorch_model.bin
```
This is it. `pytorch_model.bin` will now contain the full fp32 model weights consolidated from multiple GPUs.

The script will automatically be able to handle either a ZeRO-2 or a ZeRO-3 checkpoint.

`python zero_to_fp32.py -h` will give you usage details.

The script will auto-discover the deepspeed sub-folder using the contents of the file `latest`, which in the current example will contain `global_step1`.

Note: currently the script requires 2x general RAM of the final fp32 model weights.
### ZeRO-3 and Infinity Nuances

ZeRO-3 is quite different from ZeRO-2 because of its param sharding feature.

ZeRO-Infinity further extends ZeRO-3 with NVMe memory support and multiple other speed and scalability improvements.

While every effort has been made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information to be needed.
#### Constructing Massive Models

DeepSpeed/ZeRO-3 can handle models with billions of parameters, which may not fit into the existing RAM. In such cases, and also if you want the initialization to happen much faster, initialize the model using the *deepspeed.zero.Init()* context manager (which is also a function decorator), like so:
```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed
with deepspeed.zero.Init():
config = T5Config.from_pretrained("t5-small")
model = T5ForConditionalGeneration(config)
```
As you can see, this gives you a randomly initialized model.

If you want to use a pretrained model, `model_class.from_pretrained` will activate this feature as long as `is_deepspeed_zero3_enabled()` returns `True`, which currently is set up by the [`TrainingArguments`] object if the passed DeepSpeed configuration file contains a ZeRO-3 config section. Thus you must create the [`TrainingArguments`] object **before** calling `from_pretrained`. Here is an example of a possible sequence:
```python
from transformers import AutoModel, Trainer, TrainingArguments
training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
If you're using the official example scripts and your command line arguments include `--deepspeed ds_config.json` with a ZeRO-3 config enabled, then everything is already done for you, since this is how the example scripts are written.

Note: If the fp16 weights of the model can't fit onto the memory of a single GPU, this feature must be used.

For full details on this method and other related features, please refer to [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models).

Also, when loading fp16-pretrained models, you will want to tell `from_pretrained` to use `torch_dtype=torch.float16`. For details, please see [from_pretrained-torch-dtype](#from_pretrained-torch-dtype).
#### Gathering Parameters

Under ZeRO-3 on multiple GPUs, no single GPU has all the parameters unless they are the parameters of the currently executing layer. So if you need to access all parameters from all layers at once, there is a specific method to do it. Most likely you won't need it, but if you do, please refer to [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination).

We do, however, use it internally in several places. One such example is when loading pretrained model weights in `from_pretrained`. We load one layer at a time and immediately partition it to all participating GPUs, as for very large models it won't be possible to load them on one GPU first and then spread them out to multiple GPUs, due to memory limitations.
Also under ZeRO-3, if you write your own code and run into a model parameter weight that looks like:
```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```
stress on `tensor([1.])`, or if you get an error where it says the parameter is of size `1` instead of some much larger multi-dimensional shape, this means that the parameter is partitioned and what you see is a ZeRO-3 placeholder.
<a id='deepspeed-zero-inference'></a>
### ZeRO Inference
ZeRO Inference uses the same config as ZeRO-3 Training. You just don't need the optimizer and scheduler sections. In fact, you can leave these in the config file if you want to share the same one with the training. They will just be ignored.

Otherwise you just need to pass the usual [`TrainingArguments`] arguments. For example:
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```
The only important thing is that you need to use a ZeRO-3 configuration, since ZeRO-2 provides no benefit whatsoever for inference, as only ZeRO-3 performs the sharding of parameters, whereas ZeRO-1 shards gradients and optimizer states.

Here is an example of running `run_translation.py` under DeepSpeed, deploying all available GPUs:
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50 \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
```
Since for inference there is no need for the additional large memory used by the optimizer states and the gradients, you should be able to fit much larger batches and/or sequence lengths onto the same hardware.

Additionally, DeepSpeed is currently developing a related product called DeepSpeed-Inference, which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can't fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete.
### Memory Requirements

Since DeepSpeed ZeRO can offload memory to CPU (and NVMe), the framework provides utils that allow you to tell how much CPU and GPU memory will be needed depending on the number of GPUs being used.

Let's estimate how much memory is needed to finetune `bigscience/T0_3B` on a single GPU:
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```
So you can fit it on a single 80GB GPU with no CPU offload, or on a tiny 8GB GPU, but then you need ~60GB of CPU memory. (Remember this is just the memory for params, optimizer states and gradients; you will need a bit more memory for the cuda kernels, activations and temps.)
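As a back-of-the-envelope check of the estimator's numbers, ZeRO's commonly cited per-parameter cost for mixed-precision Adam training is 16 bytes: 2 for the fp16 weights, 2 for the fp16 gradients, and 12 for the fp32 optimizer states (master weights, momentum, variance). This is illustrative arithmetic only; the live estimator reports somewhat larger per-GPU values because it also accounts for additional buffers:

```python
def zero_memory_gb(num_params):
    """Approximate memory for params, grads and Adam optimizer states in
    mixed precision: 2 (fp16 weights) + 2 (fp16 grads) + 12 (fp32 master
    weights, momentum, variance) = 16 bytes per parameter."""
    return num_params * 16 / 2**30

# ~2.783B parameters, as reported by the estimator for bigscience/T0_3B
print(round(zero_memory_gb(2_783_000_000), 2))  # 41.47 GB before any sharding or offload
```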
Then it's a tradeoff of cost vs speed. It'll be cheaper to buy/rent a smaller GPU (or fewer GPUs, since you can use multiple GPUs with DeepSpeed ZeRO). But then it'll be slower, so even if you don't care about how fast something gets done, the slowdown has a direct impact on the duration of using the GPU and thus a greater cost. So experiment and compare which works the best.

If you have enough GPU memory, make sure to disable the CPU/NVMe offload, as it'll make everything faster.
For example, let's repeat the same for 2 GPUs:
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ãããã£ãŠãããã§ã¯ãCPU ã«ãªãããŒãããã« 2 æã® 32GB 以äžã® GPU ãå¿
èŠã«ãªããŸãã

詳现ã«ã€ããŠã¯ã[ã¡ã¢ãªæšå®ããŒã«](https://deepspeed.readthedocs.io/en/latest/memory.html) ãåç
§ããŠãã ããã
### Filing Issues
ããã§ã¯ãåé¡ã®åå ãè¿
éã«çªãæ¢ããŠäœæ¥ã®ãããã¯ã解æ¶ã§ããããã«ãåé¡ (issue) ãå ±åããæ¹æ³ã説æããŸãã

ã¬ããŒãã«ã¯å¿
ãæ¬¡ã®å
容ãå«ããŠãã ããã

1. ã¬ããŒãå
ã®å®å
šãª Deepspeed æ§æãã¡ã€ã«
2. [`Trainer`] ã䜿çšããŠããå Žåã¯ã³ãã³ãã©ã€ã³åŒæ°ããã¬ãŒããŒã®ã»ããã¢ãããèªåã§ã¹ã¯ãªããäœæããŠããå Žå㯠[`TrainingArguments`] åŒæ°ã[`TrainingArguments`] ã«ã¯ç¡é¢ä¿ãªãšã³ããªãå€æ°å«ãŸããŠãããããå
šäœããã³ãããã®ã¯é¿ããŠãã ããã
3. 次ã®åºå:
```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```
4. å¯èœã§ããã°ãåé¡ãåçŸã§ãã Google Colab ããŒãããã¯ãžã®ãªã³ã¯ãå«ããŠãã ãããããã䜿ããŸã
[ããŒãããã¯](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) ãšããŠ
åºçºç¹ã
5. äžå¯èœã§ãªãéããã«ã¹ã¿ã ããŒã¿ã»ããã§ã¯ãªããåžžã«äœ¿çšã§ããæšæºããŒã¿ã»ããã䜿çšããŠãã ããã
6. å¯èœã§ããã°ãæ¢åã® [ãµã³ãã«](https://github.com/huggingface/transformers/tree/main/examples/pytorch) ã®ããããã䜿çšããŠåé¡ãåçŸããŠã¿ãŠãã ããã
- Deepspeed ãåé¡ã®åå ã§ã¯ãªãããšããããããŸããæåºãããåé¡ã®äžéšã¯ãDeepspeed ãšã¯ç¡é¢ä¿ã§ããããšã倿ããŸãããã€ãŸããã»ããã¢ãããã Deepspeed ãåãé€ããŠãåé¡ããŸã æ®ã£ãŠããã®ã§ãã
  ãããã£ãŠãäŸå€ãçºçã DeepSpeed ã¢ãžã¥ãŒã«ãé¢ä¿ããŠããããšãããããªã©ãDeepSpeed é¢é£ã®åé¡ã§ããããšãå®å
šã«æçœã§ãªãéãããŸã DeepSpeed ãå«ãŸãªãã»ããã¢ãããåãã¹ãããŠãã ããã
  ããã§ãåé¡ã解決ããªãå Žåã«ã®ã¿ãDeepspeed ã«ã€ããŠèšåããå¿
èŠãªè©³çްããã¹ãŠæäŸããŠãã ããã
- åé¡ãçµ±åéšåã§ã¯ãªã DeepSpeed ã³ã¢ã«ããããšãæãããªå Žåã¯ã[Deepspeed](https://github.com/microsoft/DeepSpeed/) ã«çŽæ¥åé¡ãæåºããŠãã ãããããããªãå Žåã§ãããå®å¿ãã ãããã©ã¡ãã®åé¡ãã©ãã«ãŒã§ãæ§ããŸãããæçš¿ããã ããã°ãã¡ãã§å€æããå¿
èŠã«å¿ããŠå¥ã®åé¡ãã©ãã«ãŒã«ãªãã€ã¬ã¯ãããŸãã
### Troubleshooting
#### the `deepspeed` process gets killed at startup without a traceback
`deepspeed` ããã»ã¹ãèµ·åæã«ãã¬ãŒã¹ããã¯ãªãã§åŒ·å¶çµäºãããå Žåãããã¯éåžžãããã°ã©ã ãã·ã¹ãã ã®æã€éããŸãã¯ããã»ã¹ã«å²ãåœãŠãèš±å¯ãããŠããéãããå€ãã® CPU ã¡ã¢ãªãå²ãåœãŠãããšããOS ã«ãŒãã«ããã®ããã»ã¹ã匷å¶çµäºããããšãæå³ããŸããããã¯ãèšå®ãã¡ã€ã«ã§ `offload_optimizer` ãŸã㯠`offload_param` (ãããã¯ãã®äž¡æ¹) ã `cpu` ã«ãªãããŒãããããã«èšå®ãããŠããå¯èœæ§ãé«ãããã§ããNVMe ãããå ŽåãZeRO-3 ã§å®è¡ããŠãããªã NVMe ãžã®ãªãããŒãã詊ããŠãã ãããç¹å®ã®ã¢ãã«ã«å¿
èŠãªã¡ã¢ãªéãèŠç©ããæ¹æ³ã¯ [ãã¡ã](https://deepspeed.readthedocs.io/en/latest/memory.html) ãåç
§ããŠãã ããã
#### training and/or eval/predict loss is `NaN`
ããã¯ãbf16 æ··å粟床ã¢ãŒãã§äºåãã¬ãŒãã³ã°ãããã¢ãã«ãååŸããããã fp16 (æ··å粟床ã®æç¡ã«ããããã) ã§äœ¿çšããããšããå Žåã«ããçºçããŸãã TPU ã§ãã¬ãŒãã³ã°ãããã»ãšãã©ã®ã¢ãã«ãããã³å€ãã®å ŽåãGoogle ã«ãã£ãŠãªãªãŒã¹ãããã¢ãã«ã¯ããã®ã«ããŽãªã«åé¡ãããŸã (ããšãã°ãã»ãŒãã¹ãŠã® t5 ããŒã¹ã®ã¢ãã«)ãããã§ã®è§£æ±ºçã¯ãããŒããŠã§ã¢ããµããŒãããŠããå Žå (TPUãAmpere GPU 以é)ãfp32 ãŸã㯠bf16 ã䜿çšããããšã§ãã

ãã 1 ã€ã®åé¡ã¯ fp16 ã®äœ¿çšã«é¢é£ããŸãã`fp16` ã»ã¯ã·ã§ã³ã次ã®ããã«æ§æããŠããŠã
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
ãã°ã«ã¯ãDeepspeed ãæ¬¡ã®ããã«`OVERFLOW!`ãå ±åããŠããããšãããããŸãã
```
0%| | 0/189 [00:00<?, ?it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
1%|â | 1/189 [00:00<01:26, 2.17it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
1%|ââ
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
14%|âââââââââââââââââ | 27/189 [00:14<01:13, 2.21it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|ââââââââââââââââââ | 28/189 [00:14<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|ââââââââââââââââââ | 29/189 [00:15<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```
ããã¯ãDeepspeed ã®æå€±ã¹ã±ãŒã©ãŒããæå€±ã®ãªãŒããŒãããŒãå
æããã¹ã±ãŒãªã³ã°ä¿æ°ãèŠã€ãããããšãæå³ããŸã (ãã°ã¯ããã§èªã¿ãããããããã«æŽåœ¢ããŠããŸã)ã

ãã®å Žåãé垞㯠`initial_scale_power` ã®å€ãäžããå¿
èŠããããŸããéåžžã`"initial_scale_power": 32` ã«èšå®ãããšåé¡ã¯è§£æ±ºããŸãã
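ãã®ä¿®æ£ãèšå®ã«åæ ãããšã`fp16` ã»ã¯ã·ã§ã³ã¯ããšãã°æ¬¡ã®ãããªæç²ã«ãªããŸã (ä»ã®å€ã¯äžèšã®ããã©ã«ãäŸã®ãŸãŸ):

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 32,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```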
### Notes
- DeepSpeed 㯠PyTorch ã® [`Trainer`] ã§ã¯åäœããŸãããTF ã® [`TFTrainer`] ã§ã¯åäœããŸããã
- DeepSpeed ã«ã¯ pip ã§ã€ã³ã¹ããŒã«å¯èœãª PyPI ããã±ãŒãžããããŸãããããŒããŠã§ã¢ã«æãé©åãããããŸã 1 ããã Adam ãªã©ã® (pypi ãã£ã¹ããªãã¥ãŒã·ã§ã³ã§ã¯å©çšã§ããªã) ç¹å®ã®æ©èœãæå¹ã«ã§ããããã[ãœãŒã¹](https://github.com/microsoft/deepspeed#installation) ããã€ã³ã¹ããŒã«ããããšã匷ããå§ãããŸãã
- ð€ Transformers ã§ DeepSpeed ã䜿çšããããã« [`Trainer`] ã䜿çšããå¿
èŠã¯ãããŸãã - ä»»æã®ã¢ãã«ã䜿çšã§ããŸãããã ãåŸè
ã®å Žå㯠[DeepSpeed çµ±åæé ](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models) ã«åŸã£ãŠèª¿æŽããå¿
èŠããããŸãã
## Non-Trainer Deepspeed Integration
[`~integrations.HfDeepSpeedConfig`] ã¯ã[`Trainer`] ã䜿çšããªãå Žåã« Deepspeed ã ð€ Transformers ã³ã¢ã«çµ±åããããã«äœ¿çšãããŸãããããè¡ãå¯äžã®ããšã¯ãDeepspeed ZeRO-3 ã®ãã©ã¡ãŒã¿åéãåŠçãã`from_pretrained` åŒã³åºãäžã«ã¢ãã«ãè€æ°ã® GPU ã«èªåçã«åå²ããããšã§ãããã以å€ã¯ãã¹ãŠèªåã§è¡ãå¿
èŠããããŸãã

[`Trainer`] ã䜿çšãããšããã¹ãŠãèªåçã«åŠçãããŸãã

[`Trainer`] ã䜿çšããªãå ŽåãDeepSpeed ZeRO-3 ãå¹ççã«å°å
¥ããã«ã¯ãã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããåã« [`~integrations.HfDeepSpeedConfig`] ãªããžã§ã¯ããã€ã³ã¹ã¿ã³ã¹åãããã®ãªããžã§ã¯ããçããããŸãŸã«ããŠãã ããã

Deepspeed ZeRO-1 ãŸã㯠ZeRO-2 ã䜿çšããŠããå Žåã¯ã`HfDeepSpeedConfig` ã䜿çšããå¿
èŠã¯ãŸã£ãããããŸããã
ããšãã°ãäºåãã¬ãŒãã³ã°ãããã¢ãã«ã®å Žåã¯æ¬¡ã®ããã«ãªããŸãã
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = AutoModel.from_pretrained("gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
ãŸãã¯ãäºåãã¬ãŒãã³ã°ãããŠããªãã¢ãã«ã®å Žå:
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
[`Trainer`] çµ±åã䜿çšããŠããªãå Žåã¯ãå®å
šã«ç¬åã§è¡ãããšã«ãªãç¹ã«æ³šæããŠãã ãããåºæ¬çã«ã¯ [Deepspeed](https://www.deepspeed.ai/) Web ãµã€ãã®ããã¥ã¡ã³ãã«åŸã£ãŠãã ããããŸããèšå®ãã¡ã€ã«ãæç€ºçã«èšå®ããå¿
èŠããããŸãã`"auto"` å€ã¯äœ¿çšã§ããã代ããã«å®éã®å€ãå
¥åããå¿
èŠããããŸãã
## HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all
### Custom DeepSpeed ZeRO Inference
以äžã¯ãåäžã® GPU ã«ã¢ãã«ãåãŸããªãå Žåã«ã[`Trainer`] ã䜿çšããã« DeepSpeed ZeRO æšè«ãå®è¡ããæ¹æ³ã®äŸã§ãã解決çãšããŠã¯ã远å ã® GPU ã䜿çšãããããŸã㯠GPU ã¡ã¢ãªã CPU ã¡ã¢ãªã«ãªãããŒãããããšãå¯èœã§ãã

ããã§çè§£ãã¹ãéèŠãªãã¥ã¢ã³ã¹ã¯ãZeRO ã®èšè𿹿³ã«ãããç°ãªã GPU ã§ç°ãªãå
¥åã䞊è¡ããŠåŠçã§ãããšããããšã§ãã

ãã®äŸã«ã¯å€§éã®ã¡ã¢ããããèªå·±ææžåãããŠããŸãã

å¿
ãæ¬¡ã®ããšãè¡ã£ãŠãã ããã

1. åå㪠GPU ã¡ã¢ãªãããå Žåã¯ãCPU ãªãããŒããç¡å¹ã«ããŸã (é床ãäœäžãããã)ã
2. Ampere 以éã® GPU ãææããŠããå Žåã¯ãåŠçãé«éåããããã« bf16 ãæå¹ã«ããŸãããã®ããŒããŠã§ã¢ããªãå Žåã¯ãbf16 æ··å粟床ã§äºåãã¬ãŒãã³ã°ãããã¢ãã« (ã»ãšãã©ã® t5 ã¢ãã«ãªã©) ã䜿çšããªãéããfp16 ãæå¹ã«ã§ããŸãããããã®ã¢ãã«ã¯ fp16 ã§ã¯éåžžãªãŒããŒãããŒããåºåãšããŠæå³ã®ãªããã® (garbage) ã衚瀺ãããŸãã
```python
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For indepth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed
# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
ããã`t0.py`ãšããŠä¿åããŠå®è¡ããŸãããã
```
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
ããã¯éåžžã«åºæ¬çãªäŸã§ãããããŒãºã«åãããŠèª¿æŽããŠãã ããã
### `generate` nuances
ZeRO Stage-3 ã§è€æ°ã® GPU ã䜿çšããå Žåã`generate(..., synced_gpus=True)` ãåŒã³åºã㊠GPU ãåæããå¿
èŠããããŸãããããè¡ããªããšã1 ã€ã® GPU ãä»ã® GPU ããå
ã«çæãçµäºããå Žåãæ®ãã® GPU ãçæã忢ãã GPU ãããŠã§ã€ãã®ã·ã£ãŒããåä¿¡ã§ããªããªããããã·ã¹ãã å
šäœããã³ã°ããŸãã

`transformers>=4.28` 以éã`synced_gpus` ãæç€ºçã«æå®ãããŠããªãå Žåããããã®æ¡ä»¶ãæ€åºããããšèªåçã« `True` ã«èšå®ãããŸãããã ããå¿
èŠã«å¿ã㊠`synced_gpus` ã®å€ããªãŒããŒã©ã€ãããããšãã§ããŸãã
## Deepspeed çµ±åã®ãã¹ã
DeepSpeed çµ±åãå«ã PR ãéä¿¡ããå Žåã¯ãCircleCI ã® PR CI ã»ããã¢ããã«ã¯ GPU ããªãããšã«æ³šæããŠãã ããããã®ãããGPU ãå¿
èŠãšãããã¹ãã¯å¥ã® CI ã§æ¯æ©ã®ã¿å®è¡ãããŸãããããã£ãŠãPR ã§ç·è²ã® CI ã¬ããŒãã衚瀺ãããŠããDeepSpeed ãã¹ããåæ ŒããŠããããšãæå³ããããã§ã¯ãããŸããã
DeepSpeed ãã¹ããå®è¡ããã«ã¯ãå°ãªããšã以äžãå®è¡ããŠãã ããã
```
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```
ã¢ããªã³ã°ãŸã㯠pytorch ãµã³ãã« ã³ãŒãã®ããããã倿Žããå Žåã¯ãModel Zoo ãã¹ããå®è¡ããŸãã以äžã¯ãã¹ãŠã® DeepSpeed ãã¹ããå®è¡ããŸãã
```
RUN_SLOW=1 pytest tests/deepspeed
```
## Main DeepSpeed Resources
- [ãããžã§ã¯ãã® github](https://github.com/microsoft/deepspeed)
- [äœ¿çšæ¹æ³ããã¥ã¡ã³ã](https://www.deepspeed.ai/getting-started/)
- [API ããã¥ã¡ã³ã](https://deepspeed.readthedocs.io/en/latest/index.html)
- [ããã°æçš¿](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
è«æ:
- [ZeRO: å
ãã©ã¡ãŒã¿ ã¢ãã«ã®ãã¬ãŒãã³ã°ã«åããã¡ã¢ãªã®æé©å](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: 10 åèŠæš¡ã®ã¢ãã« ãã¬ãŒãã³ã°ã®æ°äž»å](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: 極éã¹ã±ãŒã«ã®æ·±å±€åŠç¿ã®ããã® GPU ã¡ã¢ãªã®å£ãæã¡ç Žã](https://arxiv.org/abs/2104.07857)
æåŸã«ãHuggingFace [`Trainer`] 㯠DeepSpeed ã®ã¿ãçµ±åããŠããããšãèŠããŠãããŠãã ããã
DeepSpeed ã®äœ¿çšã«é¢ããŠåé¡ã質åãããå Žåã¯ã[DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues) ã«åé¡ãæåºããŠãã ããã
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/configuration.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# æ§æ

åºæ¬ã¯ã©ã¹ [`PretrainedConfig`] ã¯ãæ§æãããŒã«ã« ãã¡ã€ã«ãŸãã¯ãã£ã¬ã¯ããªããããŸãã¯ã©ã€ãã©ãªãæäŸããäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«æ§æ (HuggingFace ã® AWS S3 ãªããžããªããããŠã³ããŒããããã®) ããããŒã/ä¿åããããã®å
±éã¡ãœãããå®è£
ããŸãã

åæŽŸçæ§æã¯ã©ã¹ã¯ã¢ãã«åºæã®å±æ§ãå®è£
ããŸãããã¹ãŠã®æ§æã¯ã©ã¹ã«å
±éããŠååšãã屿§ã¯ `hidden_size`ã`num_attention_heads`ãããã³ `num_hidden_layers` ã§ããããã¹ã ã¢ãã«ã¯ããã«ãå ã㊠`vocab_size` ãå®è£
ããŸãã
## PretrainedConfig
[[autodoc]] PretrainedConfig
- push_to_hub
- all
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/logging.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging
ð€ Transformersã«ã¯ãã©ã€ãã©ãªã®è©³çŽ°åºŠãç°¡åã«èšå®ã§ããäžå€®éäžåã®ãã®ã³ã°ã·ã¹ãã ããããŸãã
çŸåšãã©ã€ãã©ãªã®ããã©ã«ãã®è©³çŽ°åºŠã¯ãWARNINGãã§ãã
詳现床ã倿Žããã«ã¯ãçŽæ¥èšå®ã¡ãœããã®1ã€ã䜿çšããã ãã§ããäŸãã°ã詳现床ãINFOã¬ãã«ã«å€æŽããæ¹æ³ã¯ä»¥äžã®éãã§ãã
```python
import transformers
transformers.logging.set_verbosity_info()
```
ç°å¢å€æ° `TRANSFORMERS_VERBOSITY` ã䜿çšããŠãããã©ã«ãã®è©³çŽ°åºŠããªãŒããŒã©ã€ãããããšãã§ããŸãã`debug`ã`info`ã`warning`ã`error`ã`critical` ã®ããããã«èšå®ã§ããŸããäŸãã°ïŒ
```bash
TRANSFORMERS_VERBOSITY=error ./myprogram.py
```
ããã«ãäžéšã®ãèŠåãã¯ãç°å¢å€æ° `TRANSFORMERS_NO_ADVISORY_WARNINGS` ã *1* ãªã©ã® true å€ã«èšå®ããããšã§ç¡å¹ã«ã§ããŸããããã«ããã[`logger.warning_advice`] ã䜿çšããŠãã°ã«èšé²ãããèŠåãç¡å¹ã«ãªããŸããäŸãã°ïŒ
```bash
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
以äžã¯ãç¬èªã®ã¢ãžã¥ãŒã«ãŸãã¯ã¹ã¯ãªããã§ã©ã€ãã©ãªãšåããã¬ãŒã䜿çšããæ¹æ³ã®äŸã§ãã
```python
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
```
ãã®ãã®ã³ã° ã¢ãžã¥ãŒã«ã®ãã¹ãŠã®ã¡ãœããã¯ä»¥äžã«ææžåãããŠããŸããäž»ãªã¡ãœããã¯æ¬¡ã®ãšããã§ãã
[`logging.get_verbosity`] ã¯ããã¬ãŒã®çŸåšã®è©³çŽ°åºŠã¬ãã«ãååŸãã[`logging.set_verbosity`] ã¯ã詳现床ãéžæããã¬ãã«ã«èšå®ããŸãã詳现床ã®äœããã®ããé«ããã®ãžé ã«ããããã®ã¬ãã« (æ¬åŒ§å
ã¯å¯Ÿå¿ãã int å€) ã¯æ¬¡ã®ãšããã§ãã
- `transformers.logging.CRITICAL` ãŸã㯠`transformers.logging.FATAL` (int å€ã50): æãé倧ãªãšã©ãŒã®ã¿ãã¬ããŒãããŸãã
- `transformers.logging.ERROR` (int å€ã40): ãšã©ãŒã®ã¿ãå ±åããŸãã
- `transformers.logging.WARNING` ãŸã㯠`transformers.logging.WARN` (int å€ã30): ãšã©ãŒãš
èŠåãããã¯ã©ã€ãã©ãªã§äœ¿çšãããããã©ã«ãã®ã¬ãã«ã§ãã
- `transformers.logging.INFO` (int å€ã20): ãšã©ãŒãèŠåãããã³åºæ¬æ
å ±ãã¬ããŒãããŸãã
- `transformers.logging.DEBUG` (int å€ã10): ãã¹ãŠã®æ
å ±ãã¬ããŒãããŸãã
ããã©ã«ãã§ã¯ãã¢ãã«ã®ããŠã³ããŒãäžã«ãtqdmãé²è¡ç¶æ³ããŒã衚瀺ãããŸãã [`logging.disable_progress_bar`] ããã³ [`logging.enable_progress_bar`] ã䜿çšããŠããã®åäœãæå¶ãŸãã¯æå¶è§£é€ã§ããŸãã
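äŸãã°ãã¹ã¯ãªããã®å
é ã§è©³çŽ°åºŠãšé²è¡ç¶æ³ããŒã®äž¡æ¹ãå¶åŸ¡ããã«ã¯ã次ã®ããã«ããŸã (äžèšã®ã¡ãœãããçµã¿åãããã ãã®ã¹ã±ããã§ã):

```python
from transformers.utils import logging

# é²è¡ç¶æ³ããŒãç¡å¹åãã (ããšãã°ãã°ãã¡ã€ã«ãæ±ããããªãå Žå)
logging.disable_progress_bar()

# ãšã©ãŒã®ã¿ãã¬ããŒãããããã«èšå®ãã
logging.set_verbosity_error()

print(logging.get_verbosity())  # 40 (transformers.logging.ERROR)
```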
## `logging` vs `warnings`
Python ã«ã¯ãããçµã¿åãããŠäœ¿çšãããããšã®å€ã 2 ã€ã®ãã®ã³ã° ã·ã¹ãã ããããŸããäžã§èª¬æãã `logging` ãšã`warnings` ã§ãã`warnings` ã§ã¯ãç¹å®ã®ãã±ããå
ã®èŠåãããã«åé¡ã§ããŸã (äŸ: ãã§ã«éæšå¥šã«ãªã£ãæ©èœããã¹ã«å¯Ÿãã `FutureWarning`ãä»åŸã®éæšå¥šã瀺ã `DeprecationWarning`)ã

`transformers` ã©ã€ãã©ãªã§ã¯äž¡æ¹ã䜿çšããŠããã`logging` ã® `captureWarnings` ã¡ãœãããæŽ»çšããŠæ¡åŒµããããšã§ããããã®èŠåã¡ãã»ãŒãžãäžèšã®è©³çŽ°åºŠèšå®ããŒã«ã§ç®¡çã§ããããã«ããŠããŸãã
ããã¯ã©ã€ãã©ãªã®éçºè
ã«ãšã£ãŠäœãæå³ããã®ã§ããããïŒæ¬¡ã®ãã¥ãŒãªã¹ãã£ãã¯ãå°éããŠãã ããã

- `warnings` ã¯ãã©ã€ãã©ãªããã³ `transformers` ã«äŸåããã©ã€ãã©ãªã®éçºè
ãåªå
çã«äœ¿çšãã¹ãã§ãã
- `logging` ã¯ãæ¥åžžã®ãããžã§ã¯ãã§ã©ã€ãã©ãªã䜿çšãããšã³ããŠãŒã¶ãŒåãã«äœ¿çšãã¹ãã§ãã

以äžã® `captureWarnings` ã¡ãœããã®ãªãã¡ã¬ã³ã¹ãåç
§ããŠãã ããã
[[autodoc]] logging.captureWarnings
## Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
## Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/image_processor.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image Processor
ç»åããã»ããµã¯ãããžã§ã³ ã¢ãã«ã®å
¥åç¹åŸŽéã®æºåãšããã®åºåã®åŸåŠçãæ
åœããŸããããã«ã¯ããµã€ãºå€æŽãæ£èŠåãPyTorchãTensorFlowãFlaxãNumPy ãã³ãœã«ãžã®å€æãªã©ãå«ãŸããŸãããŸããããžãããã»ã°ã¡ã³ããŒã·ã§ã³ ãã¹ã¯ã«å€æãããªã©ãã¢ãã«åºæã®åŸåŠçãå«ãŸããå ŽåããããŸãã
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
- from_pretrained
- save_pretrained
## BatchFeature
[[autodoc]] BatchFeature
## BaseImageProcessor
[[autodoc]] image_processing_utils.BaseImageProcessor
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/callback.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ã³ãŒã«ããã¯
ã³ãŒã«ããã¯ã¯ãPyTorch ã® [`Trainer`] ã®ãã¬ãŒãã³ã° ã«ãŒãã®åäœãã«ã¹ã¿ãã€ãºã§ãããªããžã§ã¯ãã§ã (ãã®æ©èœã¯ TensorFlow ã«ã¯ãŸã å®è£
ãããŠããŸãã)ããã¬ãŒãã³ã° ã«ãŒãã®ç¶æ
ãæ€æ»ã㊠(鲿ã¬ããŒããTensorBoard ããã®ä»ã® ML ãã©ãããã©ãŒã ãžã®ãã°èšé²ãªã©)ãæ±ºå®ãäžãããšãã§ããŸã (æ©æåæ¢ãªã©)ã

ã³ãŒã«ããã¯ã¯ãè¿ããã [`TrainerControl`] ãªããžã§ã¯ããé€ãã°ããèªã¿åãå°çšãã®ã³ãŒãéšåã§ããããã¬ãŒãã³ã° ã«ãŒãå
ã§ã¯äœã倿Žã§ããŸããããã¬ãŒãã³ã° ã«ãŒãèªäœã®å€æŽãå¿
èŠãªã«ã¹ã¿ãã€ãºã®å Žåã¯ã[`Trainer`] ããµãã¯ã©ã¹åããå¿
èŠãªã¡ãœããããªãŒããŒã©ã€ãããŸã (äŸã«ã€ããŠã¯ã[trainer](trainer) ãåç
§ããŠãã ãã)ã
ããã©ã«ãã§ã¯ã`TrainingArguments.report_to` 㯠`"all"` ã«èšå®ãããŠããããã[`Trainer`] ã¯æ¬¡ã®ã³ãŒã«ããã¯ã䜿çšããŸãã
- [`DefaultFlowCallback`] ã¯ããã°èšé²ãä¿åãè©äŸ¡ã®ããã©ã«ãã®åäœãåŠçããŸãã
- [`PrinterCallback`] ãŸã㯠[`ProgressCallback`] ã§é²è¡ç¶æ³ã衚瀺ãã
ãã° (æåã®ãã°ã¯ã[`TrainingArguments`] ãéã㊠tqdm ãéã¢ã¯ãã£ãåããå Žåã«äœ¿çšãããããã§ãªãå Žåã«äœ¿çšãããŸã)
2çªç®ã§ã)ã
- [`~integrations.TensorBoardCallback`] (PyTorch >= 1.4 ãä»ããŠ) tensorboard ã«ã¢ã¯ã»ã¹ã§ããå Žå
ãŸãã¯ãã³ãœã«ããŒãXïŒã
- [`~integrations.WandbCallback`] [wandb](https://www.wandb.com/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.CometCallback`] [comet_ml](https://www.comet.ml/site/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [mlflow](https://www.mlflow.org/) ãã€ã³ã¹ããŒã«ãããŠããå Žå㯠[`~integrations.MLflowCallback`]ã
- [`~integrations.NeptuneCallback`] [neptune](https://neptune.ai/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.AzureMLCallback`] [azureml-sdk](https://pypi.org/project/azureml-sdk/) ã®å Žå
ã€ã³ã¹ããŒã«ãããŠããŸãã
- [`~integrations.CodeCarbonCallback`] [codecarbon](https://pypi.org/project/codecarbon/) ã®å Žå
ã€ã³ã¹ããŒã«ãããŠããŸãã
- [`~integrations.ClearMLCallback`] [clearml](https://github.com/allegroai/clearml) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.DagsHubCallback`] [dagshub](https://dagshub.com/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.FlyteCallback`] [flyte](https://flyte.org/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.DVCLiveCallback`] [dvclive](https://www.dvc.org/doc/dvclive) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
ããã±ãŒãžãã€ã³ã¹ããŒã«ãããŠããŠãããã®çµ±åã䜿çšãããªãå Žåã¯ã`TrainingArguments.report_to` ã䜿çšãããçµ±åã®ã¿ã®ãªã¹ãã«å€æŽã§ããŸã (äŸ: `["azure_ml", "wandb"]`)ã

ã³ãŒã«ããã¯ãå®è£
ããã¡ã€ã³ã¯ã©ã¹ã¯ [`TrainerCallback`] ã§ããããã¯ã[`Trainer`] ã®ã€ã³ã¹ã¿ã³ã¹åã«äœ¿çšããã [`TrainingArguments`] ãåãåãã[`TrainerState`] ãä»ããŠãã¬ãŒããŒã®å
éšç¶æ
ã«ã¢ã¯ã»ã¹ã§ãã[`TrainerControl`] ãä»ããŠãã¬ãŒãã³ã° ã«ãŒãäžã«ããã€ãã®ã¢ã¯ã·ã§ã³ãå®è¡ã§ããŸãã
## å©çšå¯èœãªã³ãŒã«ããã¯
ã©ã€ãã©ãªã§å©çšå¯èœãª [`TrainerCallback`] ã®ãªã¹ãã¯æ¬¡ã®ãšããã§ãã
[[autodoc]] integrations.CometCallback
- setup
[[autodoc]] DefaultFlowCallback
[[autodoc]] PrinterCallback
[[autodoc]] ProgressCallback
[[autodoc]] EarlyStoppingCallback
[[autodoc]] integrations.TensorBoardCallback
[[autodoc]] integrations.WandbCallback
- setup
[[autodoc]] integrations.MLflowCallback
- setup
[[autodoc]] integrations.AzureMLCallback
[[autodoc]] integrations.CodeCarbonCallback
[[autodoc]] integrations.NeptuneCallback
[[autodoc]] integrations.ClearMLCallback
[[autodoc]] integrations.DagsHubCallback
[[autodoc]] integrations.FlyteCallback
[[autodoc]] integrations.DVCLiveCallback
- setup
## TrainerCallback
[[autodoc]] TrainerCallback
以äžã¯ãã«ã¹ã¿ã ã³ãŒã«ããã¯ã PyTorch [`Trainer`] ã«ç»é²ããæ¹æ³ã®äŸã§ãã
```python
class MyCallback(TrainerCallback):
"A callback that prints a message at the beginning of training"
def on_train_begin(self, args, state, control, **kwargs):
print("Starting training")
trainer = Trainer(
model,
args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
callbacks=[MyCallback], # We can either pass the callback class this way or an instance of it (MyCallback())
)
```
ã³ãŒã«ããã¯ãç»é²ããå¥ã®æ¹æ³ã¯ã次ã®ããã« `trainer.add_callback()` ãåŒã³åºãããšã§ãã
```python
trainer = Trainer(...)
trainer.add_callback(MyCallback)
# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
```
## TrainerState
[[autodoc]] TrainerState
## TrainerControl
[[autodoc]] TrainerControl
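[`TrainerControl`] ã®å
žåçãªäœ¿ãæ¹ãšããŠãã³ãŒã«ããã¯å
ãããã¬ãŒãã³ã°ã®åæ¢ãæç€ºã§ããŸãã以äžã¯ãæå®ã¹ãããæ°ã«éãããåæ¢ããã³ãŒã«ããã¯ã®ç°¡åãªã¹ã±ããã§ã (ã¯ã©ã¹åãšã¹ãããæ°ã¯èª¬æçšã®ä»®ã®ãã®ã§ã):

```python
from transformers import TrainerCallback


class StopAfterNStepsCallback(TrainerCallback):
    """æå®ããã¹ãããæ°ã«éãããããã¬ãŒãã³ã°ã忢ãããã³ãŒã«ããã¯ (説æçšã®ä»®ã®äŸ)"""

    def __init__(self, max_steps=100):
        self.max_steps = max_steps

    def on_step_end(self, args, state, control, **kwargs):
        # TrainerState ããçŸåšã®ã¹ãããæ°ãèªã¿åãã
        # TrainerControl çµç±ã§ãã¬ãŒãã³ã°ã®åæ¢ãæç€ºãã
        if state.global_step >= self.max_steps:
            control.should_training_stop = True
        return control
```

ãã®ã³ãŒã«ããã¯ã¯ãäžèšã®äŸãšåæ§ã« `callbacks=[StopAfterNStepsCallback(50)]` ã®ããã« [`Trainer`] ã«æž¡ããŸãã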
| 0 |
hf_public_repos/transformers/docs/source/ja | hf_public_repos/transformers/docs/source/ja/main_classes/quantization.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quantize ð€ Transformers models
## `AutoGPTQ` Integration
ð€ Transformers ã«ã¯ãèšèªã¢ãã«ã§ GPTQ éååãå®è¡ããããã® `optimum` API ãçµ±åãããŠããŸããããã©ãŒãã³ã¹ã倧å¹
ã«äœäžãããããšããæšè«é床ãèœãšãããšããªããã¢ãã«ã 8ã4ã3ãããã«ã¯ 2 ãããã§ããŒãããã³éååã§ããŸããããã¯ã»ãšãã©ã® GPU ããŒããŠã§ã¢ã§ãµããŒããããŠããŸãã
éååã¢ãã«ã®è©³çްã«ã€ããŠã¯ã以äžã確èªããŠãã ããã
- [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) è«æ
- GPTQ éååã«é¢ãã `optimum` [ã¬ã€ã](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization)
- ããã¯ãšã³ããšããŠäœ¿çšããã [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) ã©ã€ãã©ãª
### Requirements
以äžã®ã³ãŒããå®è¡ããã«ã¯ã次ã®èŠä»¶ãæºããããŠããå¿
èŠããããŸãïŒ

- ææ°ã® `AutoGPTQ` ã©ã€ãã©ãªã®ã€ã³ã¹ããŒã«: `pip install auto-gptq`
- ææ°ã® `optimum` ã®ãœãŒã¹ããã®ã€ã³ã¹ããŒã«: `pip install git+https://github.com/huggingface/optimum.git`
- ææ°ã® `transformers` ã®ãœãŒã¹ããã®ã€ã³ã¹ããŒã«: `pip install git+https://github.com/huggingface/transformers.git`
- ææ°ã® `accelerate` ã©ã€ãã©ãªã®ã€ã³ã¹ããŒã«: `pip install --upgrade accelerate`

GPTQ çµ±åã¯ä»ã®ãšããããã¹ã ã¢ãã«ã®ã¿ããµããŒãããŠãããèŠèŠãé³å£°ããã«ãã¢ãŒãã«ã¢ãã«ã§ã¯äºæãã¬æåã«ééããå¯èœæ§ãããããšã«æ³šæããŠãã ããã
### Load and quantize a model
GPTQ ã¯ãéååã¢ãã«ã䜿çšããåã«éã¿ã®ãã£ãªãã¬ãŒã·ã§ã³ãå¿
èŠãšããéååæ¹æ³ã§ãããã©ã³ã¹ãã©ãŒã㌠ã¢ãã«ãæåããéååããå Žåã¯ãéååã¢ãã«ãäœæããããŸã§ã«æéããããããšããããŸã (`facebook/opt-350m` ã¢ãã«ã® Google colab ã§ã¯çŽ 5 å)ã

ãããã£ãŠãGPTQ éååã¢ãã«ã䜿çšããã·ããªãªã¯ 2 ã€ãããŸãã1 ã€ç®ã¯ãããã§å©çšå¯èœãªãä»ã®ãŠãŒã¶ãŒã«ãã£ãŠãã§ã«éååãããã¢ãã«ãããŒãããããšã2 ã€ç®ã¯ãã¢ãã«ãæåããéååããŠä¿åãããŸãã¯ããã«ããã·ã¥ããŠãä»ã®ãŠãŒã¶ãŒã䜿çšã§ããããã«ããããšã§ãã
#### GPTQ Configuration
ã¢ãã«ãããŒãããŠéååããã«ã¯ã[`GPTQConfig`] ãäœæããå¿
èŠããããŸãã`bits` ã®æ°ãéååã®ãã£ãªãã¬ãŒã·ã§ã³ã«äœ¿ã `dataset`ãããã³ããŒã¿ã»ãããæºåããããã®ã¢ãã«ã® `tokenizer` ãæž¡ãå¿
èŠããããŸãã
```python
model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer)
```
ç¬èªã®ããŒã¿ã»ãããæååã®ãªã¹ããšããŠæž¡ãããšãã§ããããšã«æ³šæããŠãã ããããã ããGPTQ è«æã®ããŒã¿ã»ããã䜿çšããããšã匷ããå§ãããŸãã
```python
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
quantization = GPTQConfig(bits=4, dataset = dataset, tokenizer=tokenizer)
```
#### Quantization
`from_pretrained` ã䜿çšãã`quantization_config` ãèšå®ããããšã§ã¢ãã«ãéååã§ããŸãã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
```
ã¢ãã«ã®éååã«ã¯ GPU ãå¿
èŠã§ããããšã«æ³šæããŠãã ãããã¢ãã«ã¯ CPU ã«é
眮ãããéååã®ããã«ã¢ãžã¥ãŒã«ã GPU ãš CPU ã®éã§ååŸã«ç§»åãããŸãã

CPU ãªãããŒãã®äœ¿çšäžã« GPU ã®äœ¿çšéãæå€§åãããå Žåã¯ã`device_map = "auto"` ãèšå®ã§ããŸãã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```
ãã£ã¹ã¯ ãªãããŒãã¯ãµããŒããããŠããªãããšã«æ³šæããŠãã ãããããã«ãããŒã¿ã»ãããåå ã§ã¡ã¢ãªãäžè¶³ããå Žåã¯ã`from_pretrained` ã§ `max_memory` ãæž¡ãå¿
èŠãããå ŽåããããŸãã`device_map` ãš `max_memory` ã®è©³çŽ°ã«ã€ããŠã¯ããã® [ã¬ã€ã](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) ãåç
§ããŠãã ããã
<Tip warning={true}>
GPTQ éååã¯ãçŸæç¹ã§ã¯ããã¹ã ã¢ãã«ã§ã®ã¿æ©èœããŸããããã«ãéååããã»ã¹ã¯ããŒããŠã§ã¢ã«ãã£ãŠã¯é·æéãããå ŽåããããŸã (NVIDIA A100 ã䜿çšããå Žåã175B ã¢ãã« = 4 gpu æé)ãã¢ãã«ã® GPTQ éååããŒãžã§ã³ãååšããªãå Žåã¯ãããã§ç¢ºèªããŠãã ãããããã§ãªãå Žåã¯ãgithub ã§èŠæ±ãéä¿¡ã§ããŸãã
</Tip>
### Push quantized model to ð€ Hub
ä»ã® ð€ ã¢ãã«ãšåæ§ã«ã`push_to_hub` ã䜿çšããŠéååã¢ãã«ãããã«ããã·ã¥ã§ããŸããéååæ§æã¯ä¿åãããã¢ãã«ã«æ²¿ã£ãŠããã·ã¥ãããŸãã
```python
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```
éååãããã¢ãã«ãããŒã«ã« ãã·ã³ã«ä¿åãããå Žåã¯ã`save_pretrained` ã䜿çšããŠè¡ãããšãã§ããŸãã
```python
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```
`device_map` ã䜿çšããŠã¢ãã«ãéååããå Žåã¯ãä¿åããåã«ã¢ãã«å
šäœã GPU ãŸã㯠`cpu` ã®ããããã«ç§»åããŠãã ããã
```python
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```
### Load a quantized model from the ð€ Hub
You can load a quantized model from the Hub with `from_pretrained`. Make sure that the pushed weights are quantized by checking that the attribute `quantization_config` is present in the model configuration object:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
```
If you want to load a model faster and without allocating more memory than needed, the `device_map` argument also works with quantized models. Make sure that the `accelerate` library is installed:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
### Exllama kernels for faster inference
For 4-bit models, you can use the exllama kernels for faster inference. They are enabled by default. You can change this behavior by passing `disable_exllama` in [`GPTQConfig`]. This overwrites the quantization config stored in the model config; note that only attributes related to the kernels can be overwritten. Furthermore, the entire model must be on a GPU in order to use the exllama kernels.
```py
import torch
gptq_config = GPTQConfig(bits=4, disable_exllama=False)
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config = gptq_config)
```
Note that only 4-bit models are supported at the moment. We also recommend deactivating the exllama kernels if you are fine-tuning a quantized model with peft.
#### Fine-tune a quantized model
With the official support for adapters in the Hugging Face ecosystem, you can fine-tune models that have been quantized with GPTQ.
Have a look at the [`peft`](https://github.com/huggingface/peft) library for more details.
### Example demo
Check out this Google Colab [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) to learn how to quantize a model with GPTQ and how to fine-tune the quantized model with peft.
### GPTQConfig
[[autodoc]] GPTQConfig
## `bitsandbytes` Integration
ð€ Transformers is closely integrated with the most commonly used modules of `bitsandbytes`. You can load a model in 8-bit precision with a few lines of code.
This is supported by most GPU hardware since the `0.37.0` release of `bitsandbytes`.
Learn more about the quantization method in the [LLM.int8()](https://arxiv.org/abs/2208.07339) paper, or in the [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) about the collaboration.
Since its `0.39.0` release, you can load any model that supports `device_map` using 4-bit quantization, leveraging the FP4 data type.
If you want to quantize your own pytorch model, check out this [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) from the ð€ Accelerate library.
Here is what you can do using the `bitsandbytes` integration.
### General usage
You can quantize a model by using the `load_in_8bit` or `load_in_4bit` argument when calling the [`~PreTrainedModel.from_pretrained`] method, as long as the model supports loading with ð€ Accelerate and contains `torch.nn.Linear` layers. This should work the same way for any modality.
```python
from transformers import AutoModelForCausalLM
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
```
By default, all the other modules (e.g. `torch.nn.LayerNorm`) are converted to `torch.float16`. You can override this `dtype` with the `torch_dtype` argument if you want:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32)
>>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
torch.float32
```
### FP4 quantization
#### Requirements
Make sure that you have installed the requirements below before running any of the code snippets that follow.
- Latest `bitsandbytes` library
`pip install bitsandbytes>=0.39.0`
- Latest `accelerate`
`pip install --upgrade accelerate`
- Latest `transformers`
`pip install --upgrade transformers`
#### Tips and best practices
- **Advanced usage:** Refer to [this Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) for advanced usage of 4-bit quantization with all the available options.
- **Faster inference with `batch_size=1`:** Since the `0.40.0` release of bitsandbytes, you can benefit from fast inference for `batch_size=1`. Check out [these release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) and make sure you have a version greater than `0.40.0` to benefit from this feature out of the box.
- **Training:** According to the [QLoRA paper](https://arxiv.org/abs/2305.14314), for training 4-bit base models (e.g. with LoRA adapters), one should use `bnb_4bit_quant_type='nf4'`.
- **Inference:** For inference, `bnb_4bit_quant_type` does not have a large impact on performance. However, for consistency with the model's weights, make sure you use the same `bnb_4bit_compute_dtype` and `torch_dtype` arguments.
#### Load a large model in 4bit
You can divide your memory use by (roughly) 4 by using `load_in_4bit=True` when calling the `.from_pretrained` method:
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
```
<Tip warning={true}>
Note that once a model is loaded in 4-bit, it is currently not possible to push the quantized weights to the Hub. Note also that 4-bit weights cannot be trained yet, as this is not supported. However, you can use 4-bit models to train extra parameters; this is covered in the next section.
</Tip>
### Load a large model in 8bit
You can halve the memory requirements and load a model in 8-bit by using the `load_in_8bit=True` argument when calling the `.from_pretrained` method:
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
```
Then, use the model as you would usually use a [`PreTrainedModel`].
You can check the memory footprint of the model with the `get_memory_footprint` method:
```python
print(model.get_memory_footprint())
```
With this integration, we are able to load large models on smaller devices and run them without any issue.
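As a rough mental model of these numbers, the footprint is dominated by the parameter count times the bytes per parameter for the chosen precision. The sketch below makes only that back-of-the-envelope estimate; the parameter count is approximate, and the real footprint reported by `get_memory_footprint` also includes buffers and quantization constants.

```python
# Back-of-the-envelope memory estimate: parameters x bytes per parameter.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1}

def estimated_footprint_gb(num_params, dtype):
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3

num_params = 1_722_408_960  # approximate parameter count of bigscience/bloom-1b7
fp32_gb = estimated_footprint_gb(num_params, "float32")
int8_gb = estimated_footprint_gb(num_params, "int8")
```

Loading in 8-bit cuts this estimate by 4x relative to float32, which is roughly the order of magnitude you should expect from `get_memory_footprint`.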
<Tip warning={true}>
Note that once a model is loaded in 8-bit, it is currently not possible to push the quantized weights to the Hub, unless you use the latest `transformers` and `bitsandbytes`. Note also that 8-bit weights cannot be trained yet, as this is not supported. However, you can use 8-bit models to train extra parameters; this is covered in the next section.
Also note that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference as it dispatches the model efficiently across the available resources.
</Tip>
#### Advanced use cases
Here we cover some advanced use cases of FP4 quantization.
##### Change the compute dtype
The compute dtype is used to change the dtype used during computation. For example, hidden states could be in `float32` while the computation is set to bf16 for speedups. By default, the compute dtype is set to `float32`.
```python
import torch
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```
##### Using NF4 (Normal Float 4) data type
You can also use the NF4 data type, a new 4-bit data type adapted for weights that have been initialized from a normal distribution. To do so, run:
```python
from transformers import BitsAndBytesConfig
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```
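The core idea behind a quantization *data type* like NF4 is a small fixed codebook of levels: each weight is snapped to its nearest level. The levels below are made up for illustration; the real NF4 levels are derived from the quantiles of a normal distribution so that each code is used roughly equally often for normally distributed weights.

```python
# Toy codebook quantization sketch; the levels are illustrative, not real NF4.
levels = [-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0]

def quantize_to_codebook(values):
    # Map each value to the nearest entry in the codebook.
    return [min(levels, key=lambda lv: abs(lv - v)) for v in values]

codes = quantize_to_codebook([0.9, -0.6, 0.1, 0.3])
```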
##### Use nested quantization for more memory efficient inference
We also advise users to use the nested quantization technique. It saves additional memory at no extra performance cost. From our empirical observations, this makes it possible to fine-tune a llama-13b model on an NVIDIA T4 16GB with a sequence length of 1024, a batch size of 1 and 4 gradient accumulation steps.
```python
from transformers import BitsAndBytesConfig
double_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
)
model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config)
```
### Push quantized models on the ð€ Hub
You can push a quantized model to the Hub by simply using the `push_to_hub` method. This first pushes the quantization config file, then the quantized model weights.
Make sure to use `bitsandbytes>0.37.2` to be able to use this feature (at the time of writing, we tested it on `bitsandbytes==0.38.0.post1`).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model.push_to_hub("bloom-560m-8bit")
```
<Tip warning={true}>
Pushing 8-bit models to the Hub is strongly encouraged for large models. This lets the community benefit from the reduced memory footprint and, for example, load large models on Google Colab.
</Tip>
### Load a quantized model from the ð€ Hub
You can load a quantized model from the Hub with the `from_pretrained` method. Make sure that the pushed weights are quantized by checking that the attribute `quantization_config` is present in the model configuration object:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```
Note that in this case, you don't need to specify the `load_in_8bit=True` argument, but you do need to make sure that `bitsandbytes` and `accelerate` are installed.
Also note that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference as it dispatches the model efficiently across the available resources.
### Advanced use cases
This section is intended for advanced users who want to explore what is possible beyond loading and running 8-bit models.
#### Offload between `cpu` and `gpu`
One of these advanced use cases is loading a model and dispatching the weights between `CPU` and `GPU`. Note that the weights dispatched to the CPU are **not converted to 8-bit** and are kept in `float32`. This feature is intended for users who want to fit a very large model and dispatch it between GPU and CPU.
First, load a [`BitsAndBytesConfig`] from `transformers` and set the attribute `llm_int8_enable_fp32_cpu_offload` to `True`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
```
Let's say you want to load the `bigscience/bloom-1b7` model and you have just enough GPU RAM to fit the entire model except the `lm_head`. In that case, write a custom device_map as follows:
```python
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": "cpu",
"transformer.h": 0,
"transformer.ln_f": 0,
}
```
Then load your model as follows:
```python
model_8bit = AutoModelForCausalLM.from_pretrained(
"bigscience/bloom-1b7",
device_map=device_map,
quantization_config=quantization_config,
)
```
And that's it! Enjoy your model!
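Under the hood, a `device_map` like the one above is applied by prefix: each submodule or parameter is placed on the device of the most specific matching entry. The helper below is a simplified sketch of that lookup in plain Python, not ð€ Accelerate's actual implementation.

```python
# Sketch of device_map resolution: the longest matching dotted prefix wins.
def resolve_device(module_name, device_map):
    best = None
    for prefix, device in device_map.items():
        # A prefix matches the module itself or any of its children.
        if module_name == prefix or module_name.startswith(prefix + "."):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, device)
    if best is None:
        raise ValueError(f"no device found for {module_name}")
    return best[1]

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
```

Note that `transformer.word_embeddings_layernorm` needs its own entry: matching is done on dot-separated module paths, so it is not covered by the `transformer.word_embeddings` entry.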
#### Play with `llm_int8_threshold`
You can play with the `llm_int8_threshold` argument to change the threshold for outliers. An "outlier" is a hidden state value whose magnitude exceeds a certain threshold.
This corresponds to the outlier threshold for outlier detection described in the `LLM.int8()` paper. Any hidden state value above this threshold is considered an outlier, and the operations on those values are performed in fp16. Values are usually normally distributed, that is, most of them lie in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are distributed very differently for large models. These outliers often lie in the intervals [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude up to about 5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold may be needed for more unstable models (small models or fine-tuning).
This argument can affect the inference speed of the model. We suggest playing with this parameter to find the value that works best for your use case:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
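To make the threshold concrete, here is a toy sketch of the mixed-precision decomposition idea: feature dimensions that contain at least one value above the threshold are routed to the fp16 path, everything else to the int8 path. This illustrates the decision only; the actual LLM.int8() kernels apply it inside the matrix multiplication itself.

```python
# Toy sketch of outlier-dimension detection from LLM.int8().
def split_outlier_dims(hidden_states, threshold=6.0):
    """hidden_states: list of rows; returns (outlier_dims, regular_dims)."""
    num_dims = len(hidden_states[0])
    outliers = [
        d for d in range(num_dims)
        if any(abs(row[d]) > threshold for row in hidden_states)
    ]
    regular = [d for d in range(num_dims) if d not in outliers]
    return outliers, regular

hidden = [
    [0.3, -1.2, 45.0, 2.1],
    [0.1, 0.8, -38.5, -2.4],
]
outlier_dims, regular_dims = split_outlier_dims(hidden)
```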
#### Skip the conversion of some modules
Some models have modules that must not be converted to 8-bit to ensure stability. For example, the Jukebox model has several `lm_head` modules that should be skipped. Play with `llm_int8_skip_modules`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
#### Fine-tune a model that has been loaded in 8-bit
With the official support for adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded in 8-bit.
This enables fine-tuning large models such as `flan-t5-large` or `facebook/opt-6.7b` in a single Google Colab. Have a look at the [`peft`](https://github.com/huggingface/peft) library for more details.
Note that you don't need to pass `device_map` when loading a model for training; it is automatically loaded on your GPU. If needed, you can also set the device map to a specific device (e.g. `cuda:0`, `0`, `torch.device('cuda:0')`). Note that `device_map=auto` should only be used for inference.
### BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
## Quantization with ð€ `optimum`
Refer to the [Optimum documentation](https://huggingface.co/docs/optimum/index) to learn more about the quantization methods supported by `optimum`, and to see whether they are applicable to your use case.
hf_public_repos/transformers/docs/source/ja/main_classes/onnx.md:

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Exporting ð€ Transformers models to ONNX
ð€ Transformers 㯠`transformers.onnx` ããã±ãŒãžãæäŸããŸãã
èšå®ãªããžã§ã¯ããå©çšããããšã§ãã¢ãã«ã®ãã§ãã¯ãã€ã³ããONNXã°ã©ãã«å€æããããšãã§ããŸãã
詳现ã¯[ã¬ã€ã](../serialization) ãåç
§ããŠãã ããã
ãåç
§ããŠãã ããã
## ONNX Configurations
We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export:
* Encoder-based models inherit from [`~onnx.config.OnnxConfig`]
* Decoder-based models inherit from [`~onnx.config.OnnxConfigWithPast`]
* Encoder-decoder models inherit from [`~onnx.config.OnnxSeq2SeqConfigWithPast`]
### OnnxConfig
[[autodoc]] onnx.config.OnnxConfig
### OnnxConfigWithPast
[[autodoc]] onnx.config.OnnxConfigWithPast
### OnnxSeq2SeqConfigWithPast
[[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast
## ONNX Features
Each ONNX configuration is associated with a set of _features_ that enable you to export models for different types of topologies or tasks.
### FeaturesManager
[[autodoc]] onnx.features.FeaturesManager
hf_public_repos/transformers/docs/source/it/perf_train_tpu.md:

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPU
<Tip>
Note: Most of the strategies introduced in the [single-GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and the [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.
</Tip>
This document will soon be completed with information on how to train on TPUs.
hf_public_repos/transformers/docs/source/it/preprocessing.md:

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Preprocess
[[open-in-colab]]
Before you can use your data in a model, it needs to be processed into a format the model accepts. A model does not understand raw text, images or audio. These inputs need to be converted into numbers and assembled into tensors. In this tutorial, you will:
* Preprocess textual data with a tokenizer.
* Preprocess image or audio data with a feature extractor.
* Preprocess data for a multimodal task with a processor.
## NLP
<Youtube id="Yffk5aydLzg"/>
The main tool for processing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer starts by splitting text into *tokens* according to a set of rules. The tokens are converted into numbers, which are used to build the model's input tensors. Any additional inputs required by the model are also added by the tokenizer.
<Tip>
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus and uses the same tokens-to-index mapping (usually referred to as the *vocab*) as during pretraining.
</Tip>
Get started quickly by loading a pretrained tokenizer with the [`AutoTokenizer`] class. This downloads the *vocab* that was used when the model was pretrained.
### Tokenize
Load a pretrained tokenizer with [`AutoTokenizer.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```
Then pass your sentence to the tokenizer:
```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary with three important items:
* [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence.
* [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not.
* [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence.
You can decode the `input_ids` to return the original input:
```py
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```
As you can see, the tokenizer added two special tokens - `CLS` and `SEP` (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer adds them for you automatically.
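As a mental model of what happened above, here is a toy sketch of the encode/decode round trip with special tokens. The vocab and the ids in it are made up for illustration; a real tokenizer also handles subwords, unknown tokens and casing rules.

```python
# Toy sketch of tokenization: split, wrap with special tokens, map to ids.
vocab = {"[CLS]": 101, "[SEP]": 102, "hello": 7592, "world": 2088}

def toy_encode(sentence):
    tokens = ["[CLS]"] + sentence.lower().split() + ["[SEP]"]
    return [vocab[token] for token in tokens]

def toy_decode(ids):
    inverse = {i: t for t, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = toy_encode("Hello world")
```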
If you have several sentences to process, pass them to the tokenizer as a list:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
```
### Pad
This brings us to an important topic. When you process a batch of sentences, they aren't always the same length. This is a problem because tensors, the model's input, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to the shorter sentences.
Set the `padding` parameter to `True` to pad the shorter sequences in the batch to match the longest one:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
Notice that the tokenizer padded the shorter sequences with `0`s!
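The padding step itself can be sketched in a few lines of plain Python. This is a simplified sketch that assumes `0` is the pad id; real tokenizers also support left-padding and configurable pad tokens.

```python
# Sketch of padding: extend each id sequence to the batch maximum with a pad
# id, and mark the real (1) vs padded (0) positions in the attention mask.
def pad_batch(batch_ids, pad_id=0):
    max_len = max(len(seq) for seq in batch_ids)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in batch_ids]
    mask = [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in batch_ids]
    return padded, mask

padded, attention_mask = pad_batch([[101, 7, 102], [101, 7, 8, 9, 102]])
```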
### Truncation
On the other end of the spectrum, sometimes a sequence may be too long for the model to handle. In this case, you'll need to truncate the sequence to a shorter length.
Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
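The effect of `truncation=True` can be sketched as clipping a sequence to the model's maximum length while keeping the final special token. This shows only the simplest strategy; real tokenizers support several truncation strategies (and the `102` sep id here is an illustrative assumption).

```python
# Sketch of truncation: clip to max_length, keeping the trailing sep token.
def truncate(ids, max_length, sep_id=102):
    if len(ids) <= max_length:
        return ids
    return ids[: max_length - 1] + [sep_id]

short = truncate([101, 7, 102], max_length=5)
clipped = truncate([101, 7, 8, 9, 10, 11, 102], max_length=5)
```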
### Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the `return_tensors` parameter to `pt` for PyTorch, or `tf` for TensorFlow:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102],
[ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 0]])}
===PT-TF-SPLIT===
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102],
[ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]],
dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>}
```
## Audio
Audio inputs are processed differently than text, but the end goal remains the same: create numerical sequences the model can understand. A [feature extractor](main_classes/feature_extractor) is designed for the express purpose of extracting features from raw image or audio data and converting them into tensors. Before you begin, install ð€ Datasets to load an audio dataset to experiment with:
```bash
pip install datasets
```
Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the ð€ [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset):
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file:
```py
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
```
This returns three items:
* `array` is the speech signal loaded - and potentially resampled - as a 1D array.
* `path` points to the location of the audio file.
* `sampling_rate` refers to how many data points of the speech signal are measured per second.
### Resample
For this tutorial, you can use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. As you can see from the model card, the Wav2Vec2 model is pretrained on 16kHz sampled speech audio. It is important that your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If the sampling rate is different, you need to resample your audio data.
For example, the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset has a sampling rate of 8kHz. To use the Wav2Vec2 model with this dataset, upsample it to 16kHz:
```py
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
```
1. Use ð€ Datasets' [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.cast_column) method to upsample the sampling rate to 16kHz:
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```
2. Load the audio file:
```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
```
As you can see, the `sampling_rate` is now 16kHz!
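Conceptually, resampling re-evaluates the waveform on a new time grid. The sketch below does this with simple linear interpolation; real resamplers (such as the one ð€ Datasets uses) also apply proper low-pass filtering to avoid aliasing.

```python
# Sketch of resampling via linear interpolation on a new time grid.
def resample(samples, orig_rate, new_rate):
    new_len = int(len(samples) * new_rate / orig_rate)
    out = []
    for i in range(new_len):
        pos = i * orig_rate / new_rate      # position in original sample index
        left = int(pos)
        right = min(left + 1, len(samples) - 1)
        frac = pos - left
        out.append(samples[left] * (1 - frac) + samples[right] * frac)
    return out

upsampled = resample([0.0, 1.0, 0.0, -1.0], orig_rate=8000, new_rate=16000)
```

Doubling the rate from 8kHz to 16kHz doubles the number of samples, with interpolated values filling the new positions.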
### Feature extractor
The next step is to load a feature extractor to normalize and pad the input. When padding textual data, a `0` is added to the shorter sequences. The same idea applies to audio data: the audio feature extractor adds `0`s - interpreted as silence - to the `array`.
Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur.
```py
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```
### Pad and truncate
Just like the tokenizer, you can apply padding or truncation to handle variable-length sequences in a batch. Take a look at the sequence length of these two audio samples:
```py
>>> dataset[0]["audio"]["array"].shape
(173398,)
>>> dataset[1]["audio"]["array"].shape
(106496,)
```
As you can see, the first sample has a longer sequence than the second. Let's create a function that preprocesses the dataset. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays,
... sampling_rate=16000,
... padding=True,
... max_length=100000,
... truncation=True,
... )
... return inputs
```
Apply the function to the first few examples in the dataset:
```py
>>> processed_dataset = preprocess_function(dataset[:5])
```
Now take a look at the processed sample lengths:
```py
>>> processed_dataset["input_values"][0].shape
(100000,)
>>> processed_dataset["input_values"][1].shape
(100000,)
```
The lengths of the processed samples now match the maximum length you specified.
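The pad-or-truncate step the feature extractor just performed can be sketched as bringing every array to exactly `max_length` samples, padding with `0.0` (silence). This is a simplified sketch; the real feature extractor also normalizes the values.

```python
# Sketch of pad-or-truncate to a fixed number of audio samples.
def fix_length(array, max_length):
    if len(array) >= max_length:
        return array[:max_length]          # truncate the longer sample
    return array + [0.0] * (max_length - len(array))  # pad with silence

a = fix_length([0.1] * 173398, max_length=100000)
b = fix_length([0.2] * 64000, max_length=100000)
```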
## Vision
A feature extractor is also used to process images for vision tasks. Once again, the goal is to convert the raw image into a batch of tensors as input.
Let's load the [food101](https://huggingface.co/datasets/food101) dataset for this tutorial. Use the ð€ Datasets `split` parameter to load only a small sample from the training split, since the dataset is quite large:
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("food101", split="train[:100]")
```
Next, take a look at the image with the ð€ Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) feature:
```py
>>> dataset[0]["image"]
```

### Feature extractor
Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
```
### Data augmentation
For vision tasks, it is common to add some type of data augmentation to the images as part of preprocessing. You can add augmentations with any library you like, but in this tutorial you'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module.
1. Normalize the image and use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain some transforms - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) - together:
```py
>>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor
>>> normalize = Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
>>> _transforms = Compose(
... [RandomResizedCrop(feature_extractor.size), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize]
... )
```
2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input. This value is generated by the feature extractor. Create a function that generates `pixel_values` from the transforms:
```py
>>> def transforms(examples):
... examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]]
... return examples
```
3. Then use ð€ Datasets [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform) to apply the transform on the fly:
```py
>>> dataset.set_transform(transforms)
```
4. Now when you access the example, you'll notice the feature extractor has added `pixel_values` to the input schema:
```py
>>> dataset[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>,
'label': 6,
'pixel_values': tensor([[[ 0.0353, 0.0745, 0.1216, ..., -0.9922, -0.9922, -0.9922],
[-0.0196, 0.0667, 0.1294, ..., -0.9765, -0.9843, -0.9922],
[ 0.0196, 0.0824, 0.1137, ..., -0.9765, -0.9686, -0.8667],
...,
[ 0.0275, 0.0745, 0.0510, ..., -0.1137, -0.1216, -0.0824],
[ 0.0667, 0.0824, 0.0667, ..., -0.0588, -0.0745, -0.0980],
[ 0.0353, 0.0353, 0.0431, ..., -0.0039, -0.0039, -0.0588]],
[[ 0.2078, 0.2471, 0.2863, ..., -0.9451, -0.9373, -0.9451],
[ 0.1608, 0.2471, 0.3098, ..., -0.9373, -0.9451, -0.9373],
[ 0.2078, 0.2706, 0.3020, ..., -0.9608, -0.9373, -0.8275],
...,
[-0.0353, 0.0118, -0.0039, ..., -0.2392, -0.2471, -0.2078],
[ 0.0196, 0.0353, 0.0196, ..., -0.1843, -0.2000, -0.2235],
[-0.0118, -0.0039, -0.0039, ..., -0.0980, -0.0980, -0.1529]],
[[ 0.3961, 0.4431, 0.4980, ..., -0.9216, -0.9137, -0.9216],
[ 0.3569, 0.4510, 0.5216, ..., -0.9059, -0.9137, -0.9137],
[ 0.4118, 0.4745, 0.5216, ..., -0.9137, -0.8902, -0.7804],
...,
[-0.2314, -0.1922, -0.2078, ..., -0.4196, -0.4275, -0.3882],
[-0.1843, -0.1686, -0.2000, ..., -0.3647, -0.3804, -0.4039],
[-0.1922, -0.1922, -0.1922, ..., -0.2941, -0.2863, -0.3412]]])}
```
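The values in `pixel_values` above come from `ToTensor`, which scales pixels to the [0, 1] range, followed by `Normalize`, which applies `(x - mean) / std` per channel. A minimal sketch of that last step (illustrative only; the mean/std values below are examples, not the actual ViT statistics):

```python
def normalize_channel(pixels, mean, std):
    """Apply (x - mean) / std to a flat list of pixel values of one channel."""
    return [(x - mean) / std for x in pixels]

# a channel already scaled to [0, 1] by ToTensor
channel = [0.0, 0.5, 1.0]
print(normalize_channel(channel, mean=0.5, std=0.5))  # [-1.0, 0.0, 1.0]
```

This also explains why the tensor printed above contains negative values: normalization recenters the [0, 1] range around zero.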
Here is what the image looks like after preprocessing. Just as you'd expect from the applied transforms, the image has been randomly cropped and its color properties are different.
```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

## Multimodal
For multimodal tasks, you will use a combination of everything you've just learned and apply your skills to automatic speech recognition (ASR). This means you will need:
* A feature extractor to process the audio data.
* A tokenizer to process the text.
Let's return to the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset:
```py
>>> from datasets import load_dataset
>>> lj_speech = load_dataset("lj_speech", split="train")
```
Since you are mainly interested in the `audio` and `text` columns, remove the other ones:
```py
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```
Now take a look at the `audio` and `text` columns:
```py
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```
Remember from the earlier section on processing audio data: you should always [resample](preprocessing#audio) your audio data's sampling rate to match the sampling rate of the dataset used to pretrain the model:
```py
>>> from datasets import Audio

>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```
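Resampling from 22050 Hz to 16000 Hz changes the number of samples while preserving the audio's duration: the new length is roughly `old_length * new_rate / old_rate`. A quick sketch of that relationship (the real resampling also interpolates the waveform, which is omitted here):

```python
def resampled_length(num_samples, orig_rate, new_rate):
    """Number of samples after resampling; the duration in seconds is preserved."""
    return round(num_samples * new_rate / orig_rate)

# one second of audio at 22050 Hz becomes 16000 samples at 16000 Hz
print(resampled_length(22050, 22050, 16000))  # 16000
```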
### Processor
A processor combines a feature extractor and a tokenizer. Load a processor with [`AutoProcessor.from_pretrained`]:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```
1. Create a function to process the audio data into `input_values`, and tokenize the text into `labels`. These are your inputs to the model:
```py
>>> def prepare_dataset(example):
... audio = example["audio"]
... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
... return example
```
2. Apply the `prepare_dataset` function to a sample:
```py
>>> prepare_dataset(lj_speech[0])
```
Notice the processor has added `input_values` and `labels`. The sampling rate has also been correctly downsampled to 16kHz.
Awesome, you should now be able to preprocess data for any modality and even combine different modalities! In the next tutorial, you will learn how to fine-tune a model on your newly preprocessed data.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Inference on Specialized Hardware
This document will be completed soon with documentation for inference on specialized hardware. In the meantime you can check out [the guide for inference on CPUs](perf_infer_cpu).
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Share a model
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and ð€ Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources.
In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):
- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.
<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player"
frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
picture-in-picture" allowfullscreen></iframe>
<Tip>
To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.
</Tip>
## Repository features
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.
The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag, or branch.
As a result, you can load a specific model version with the `revision` parameter:
```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, branch name, or commit hash
... )
```
Files are also easily edited in a repository, and you can view the commit history as well as the differences:

## Setup
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where ð€ Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-cli login
```
If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub.
```bash
pip install huggingface_hub
```
Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Convert a model for all frameworks
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users can still load your model from a different framework if you skip this step, it will be slower because ð€ Transformers will need to convert the checkpoint on-the-fly.
Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.
<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:
```py
>>> pt_model = DistilBertForSequenceClassification.from_pretrained(
... "path/verso/il-nome-magnifico-che-hai-scelto", from_tf=True
... )
>>> pt_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:
```py
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained(
... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True
... )
```
Then you can save your new TensorFlow model with its new checkpoint:
```py
>>> tf_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:
```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True
... )
```
</jax>
</frameworkcontent>
## Share a model during training
<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>
Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:
```py
>>> training_args = TrainingArguments(output_dir="il-mio-bellissimo-modello", push_to_hub=True)
```
Pass your training arguments as usual to [`Trainer`]:
```py
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
```
After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. ð€ Transformers will even automatically add training hyperparameters, training results, and framework versions to your model card!
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:
- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.
```py
>>> from transformers import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="./il_path_dove_salvare_il_tuo_modello",
... tokenizer=tokenizer,
... hub_model_id="il-tuo-username/il-mio-bellissimo-modello",
... )
```
Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and ð€ Transformers will push the trained model to the Hub:
```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>
## Use the `push_to_hub` function
You can also call `push_to_hub` directly on your model to upload it to the Hub.
Specify your model name in `push_to_hub`:
```py
>>> pt_model.push_to_hub("il-mio-bellissimo-modello")
```
This creates a repository under your username with the model name `il-mio-bellissimo-modello`. Users can now load your model with the `from_pretrained` function:
```py
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained("il-tuo-username/il-mio-bellissimo-modello")
```
If you belong to an organization and want to push your model under the organization name instead, add the `organization` parameter:
```py
>>> pt_model.push_to_hub("il-mio-bellissimo-modello", organization="la-mia-fantastica-org")
```
The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:
```py
>>> tokenizer.push_to_hub("il-mio-bellissimo-modello")
```
Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:
```py
>>> tf_model.push_to_hub("il-mio-bellissimo-modello")
```
Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.
For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).
## Upload with the web interface
Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

From here, add some information about your model:
- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.
Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

## Add a model card
To make sure users understand your model's capabilities, limitations, potential biases, and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:
* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.
Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file, such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient inference on a single GPU
This document will be completed soon with information on how to infer on a single GPU. In the meantime you can check out [the guide for training on a single GPU](perf_train_gpu_one) and [the guide for inference on CPUs](perf_infer_cpu).
## `BetterTransformer` for faster inference
We have recently integrated `BetterTransformer` for faster inference on GPU for text, image and audio models. Check out the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.
## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition
<Tip>
Note that this feature can also be used in a multi GPU setup.
</Tip>
From the paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), we support Hugging Face integration for all models in the Hub with a few lines of code.
The method reduces `nn.Linear` size by 2 for `float16` and `bfloat16` weights and by 4 for `float32` weights, with close to no impact on quality, by operating on the outliers in half-precision.

Int8 mixed-precision matrix decomposition works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix-multiplied in fp16, and (2) a regular stream of int8 matrix multiplication (99.9% of the values). With this method, int8 inference with no predictive degradation is possible for very large models.
For more details about the method, check out the [paper](https://arxiv.org/abs/2208.07339) or our [blogpost about the integration](https://huggingface.co/blog/hf-bitsandbytes-integration).

Note that you need a GPU to run mixed-8bit models, as the kernels have been compiled for GPUs only. Make sure that you have enough GPU memory to store a quarter of the model (or half, if your model weights are in half precision) before using this feature.
Below are some notes to help you use this module, or follow the demos on [Google colab](#colab-demos).
### Requirements
- If you have `bitsandbytes<0.37.0`, make sure you run on NVIDIA GPUs that support 8-bit tensor cores (Turing, Ampere or newer architectures - e.g. T4, RTX20s RTX30s, A40-A100). For `bitsandbytes>=0.37.0`, all GPUs should be supported.
- Install the correct version of `bitsandbytes` by running:
`pip install bitsandbytes>=0.31.5`
- Install `accelerate`:
`pip install accelerate>=0.12.0`
### Running mixed-Int8 models - single GPU setup
After installing the required libraries, the way to load your mixed 8-bit model is as follows:
```py
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
For text generation, we recommend:
* using the model's `generate()` method instead of the `pipeline()` function. Although inference is possible with the `pipeline()` function, it is not optimized for mixed-8bit models and will be slower than using the `generate()` method. Moreover, some sampling strategies, like nucleus sampling, are not supported by the `pipeline()` function for mixed-8bit models.
* placing all inputs on the same device as the model.
Here is a simple example:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
text = "Hello, my llama is cute"
inputs = tokenizer(text, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
### Running mixed-8bit models - multi GPU setup
The way to load your mixed 8-bit model in multiple GPUs is as follows (same command as the single GPU setup):
```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
You can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
In this example, the first GPU will use 1GB of memory and the second one 2GB.
### Colab demos
With this method you can run inference on models that were not possible to run on a Google Colab before.
Check out the demo for running T5-11b (42GB in fp32)! Using 8-bit quantization on Google Colab:
[](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)
Or this demo for BLOOM-3B:
[](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing) | 0 |
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient training on multiple CPUs
When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, enabling distributed CPU training efficiently.
## Intel® oneCCL Bindings for PyTorch
[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training implementing collectives like allreduce, allgather, alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).
The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup; it only works on the Linux platform for now.
Check more detailed information for [oneccl_bind_pt](https://github.com/intel/torch-ccl).
### Intel® oneCCL Bindings for PyTorch installation:
Wheel files are available for the following Python versions:
| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0 | | â | â | â | â |
| 1.12.100 | | â | â | â | â |
| 1.12.0 | | â | â | â | â |
| 1.11.0 | | â | â | â | â |
| 1.10.0 | â | â | â | â | |
```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```
where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0.
Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl).
Versions of oneCCL and PyTorch must match.
<Tip warning={true}>
oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0)
PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100
</Tip>
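Since the oneCCL wheels are built against a specific PyTorch release, a tiny hypothetical helper that checks whether two version strings agree on major.minor can save a failed import. This is illustrative only and not part of either library:

```python
def same_major_minor(a, b):
    """True if two version strings share the same major.minor,
    e.g. "1.13.0" and "1.13.1"."""
    return a.split(".")[:2] == b.split(".")[:2]

print(same_major_minor("1.13.0", "1.13.0"))  # True
print(same_major_minor("1.12.0", "1.13.0"))  # False
```

Note that a matching major.minor is necessary but, as the warning above shows, not always sufficient: the 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1.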
## Intel® MPI library
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.
oneccl_bindings_for_pytorch is installed along with the MPI tool set. The environment needs to be sourced before using it.
for Intel® oneCCL >= 1.12.0
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```
for Intel® oneCCL whose version < 1.12.0
```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```
#### IPEX installation:
IPEX provides performance optimizations for CPU training with both Float32 and BFloat16; you can refer to the [single CPU section](./perf_train_cpu).
The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.
## Usage in Trainer
To enable multi CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** in the command arguments.
Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).
The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex
```
The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.
```shell script
cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
```
Now, run the following command in node0 and **4DDP** will be enabled in node0 and node1 with BF16 auto mixed precision:
```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
-genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex \
--bf16
```
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Distributed training with ð€ Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [ð€ Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a ð€ Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
## Setup
Get started by installing ð€ Accelerate:
```bash
pip install accelerate
```
Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) object. The `Accelerator` will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
```
## Preparati ad accelerare
Il prossimo passo è quello di passare tutti gli oggetti rilevanti per l'allenamento al metodo [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare). Questo include i tuoi DataLoader per l'allenamento e per la valutazione, un modello e un ottimizzatore:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader, model, optimizer
... )
```
## Backward
Infine, sostituisci il tipico metodo `loss.backward()` nel tuo loop di allenamento con il metodo [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) di 🤗 Accelerate:
```py
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... outputs = model(**batch)
... loss = outputs.loss
... accelerator.backward(loss)
... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
Come puoi vedere nel seguente codice, hai solo bisogno di aggiungere quattro righe in più di codice al tuo training loop per abilitare l'allenamento distribuito!
```diff
+ from accelerate import Accelerator
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)
+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+ train_dataloader, eval_dataloader, model, optimizer
+ )
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
- batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
- loss.backward()
+ accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
```
## Allenamento
Una volta che hai aggiunto le righe di codice rilevanti, lancia il tuo allenamento in uno script o in un notebook come Colaboratory.
### Allenamento con uno script
Se stai eseguendo il tuo allenamento da uno script, esegui il comando seguente per creare e salvare un file di configurazione:
```bash
accelerate config
```
Poi lancia il tuo allenamento con:
```bash
accelerate launch train.py
```
### Allenamento con un notebook
La libreria 🤗 Accelerate può anche essere utilizzata in un notebook se stai pianificando di utilizzare le TPU di Colaboratory. Inserisci tutto il codice legato all'allenamento in una funzione, e passala al `notebook_launcher`:
```py
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
```
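A titolo puramente indicativo, ecco uno schizzo minimale di come potrebbe essere strutturata una funzione da passare a `notebook_launcher`: gli import avvengono dentro la funzione, così ogni processo lanciato inizializza il proprio stato. Il nome `build_model_and_data` è un segnaposto ipotetico, da sostituire con il tuo codice di preparazione.

```python
def training_function():
    # Import dentro la funzione: ogni processo lanciato da
    # notebook_launcher inizializza il proprio Accelerator.
    from accelerate import Accelerator

    accelerator = Accelerator()

    # `build_model_and_data` è un segnaposto ipotetico: deve restituire
    # modello, ottimizzatore e dataloader di allenamento.
    model, optimizer, train_dataloader = build_model_and_data()
    model, optimizer, train_dataloader = accelerator.prepare(
        model, optimizer, train_dataloader
    )

    model.train()
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()


# In un notebook si lancerebbe poi, ad esempio:
# from accelerate import notebook_launcher
# notebook_launcher(training_function, num_processes=8)
```

Nota che la funzione non deve accettare argomenti obbligatori: eventuali parametri si passano a `notebook_launcher` tramite l'argomento `args`.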
Per maggiori informazioni relative a 🤗 Accelerate e le sue numerose funzionalità, fai riferimento alla [documentazione](https://huggingface.co/docs/accelerate).
<!-- File: docs/source/it/community.md -->

<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comunità
Questa pagina raggruppa le risorse sviluppate dalla comunità riguardo 🤗 Transformers.
## Risorse della comunità:
| Risorsa | Descrizione | Autore |
|:----------|:-------------|------:|
| [Glossario delle Flashcards di Transformers](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | Un insieme di flashcards basate sul [glossario della documentazione di Transformers](glossary), creato in un formato tale da permettere un facile apprendimento e revisione usando [Anki](https://apps.ankiweb.net/), un'applicazione open-source e multi-piattaforma, specificatamente progettata per ricordare informazioni nel lungo termine. Guarda questo [video introduttivo su come usare le flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) |
## Notebook della comunità:
| Notebook | Descrizione | Autore | |
|:----------|:-------------|:-------------|------:|
| [Fine-tuning di un Transformer pre-addestrato, al fine di generare testi di canzoni](https://github.com/AlekseyKorshuk/huggingartists) | Come generare testi di canzoni nello stile del vostro artista preferito attraverso il fine-tuning di un modello GPT-2. | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) |
| [Addestramento di T5 in Tensorflow 2](https://github.com/snapthat/TF-T5-text-to-text) | Come addestrare T5 per qualsiasi attività usando Tensorflow 2. Questo notebook mostra come risolvere l'attività di "Question Answering" usando Tensorflow 2 e SQUAD. | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) |
| [Addestramento di T5 con TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Come addestrare T5 su SQUAD con Transformers e NLP. | [Suraj Patil](https://github.com/patil-suraj) |[](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) |
| [Fine-tuning di T5 per la classificazione e scelta multipla](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | Come effettuare il fine-tuning di T5 per le attività di classificazione a scelta multipla - usando un formato testo-a-testo - con PyTorch Lightning. | [Suraj Patil](https://github.com/patil-suraj) | [](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) |
| [Fine-tuning di DialoGPT su nuovi dataset e lingue](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | Come effettuare il fine-tuning di un modello DialoGPT su un nuovo dataset per chatbots conversazionali open-dialog. | [Nathan Cooper](https://github.com/ncoop57) | [](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) |
| [Modellamento di una lunga sequenza con Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Come addestrare su sequenze di lunghezza fino a 500 mila token con Reformer. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) |
| [Fine-tuning di BART per riassumere testi](https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi con fastai usando blurr. | [Wayde Gilliam](https://ohmeow.com/) | [](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) |
| [Fine-tuning di un Transformer pre-addestrato su tweet](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | Come generare tweet nello stile del tuo account Twitter preferito attraverso il fine-tuning di un modello GPT-2. | [Boris Dayma](https://github.com/borisdayma) | [](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) |
| [Ottimizzazione di modelli 🤗 Hugging Face con Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Un tutorial completo che mostra l'integrazione di W&B con Hugging Face. | [Boris Dayma](https://github.com/borisdayma) | [](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) |
| [Longformer pre-addestrato](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | Come costruire una versione "long" degli esistenti modelli pre-addestrati. | [Iz Beltagy](https://beltagy.net) | [](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) |
| [Fine-tuning di Longformer per QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | Come effettuare il fine-tuning di un modello longformer per un task di QA.| [Suraj Patil](https://github.com/patil-suraj) | [](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) |
| [Valutazione di modelli con 🤗NLP](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | Come valutare longformer su TriviaQA con `NLP`. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) |
| [Fine-tuning di T5 per Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | Come effettuare il fine-tuning di T5 per la sentiment span extraction - usando un formato testo-a-testo - con PyTorch Lightning. | [Lorenzo Ampil](https://github.com/enzoampil) | [](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) |
| [Fine-tuning di DistilBert per la classificazione multi-classe](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | Come effettuare il fine-tuning di DistilBert per la classificazione multi-classe con PyTorch. | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)|
|[Fine-tuning di BERT per la classificazione multi-etichetta](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|Come effettuare il fine-tuning di BERT per la classificazione multi-etichetta con PyTorch. |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|
|[Accelerazione del fine-tuning con il Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| Come velocizzare il fine-tuning di un fattore 2X usando il dynamic padding / bucketing. |[Michael Benesty](https://github.com/pommedeterresautee) |[](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)|
|[Pre-addestramento di Reformer per Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| Come addestrare un modello Reformer usando livelli di self-attention bi-direzionali.| [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)|
|[Espansione e fine-tuning di Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| Come incrementare il vocabolario di un modello SciBERT - pre-addestrato da AllenAI sul dataset CORD - e crearne una pipeline. | [Tanmay Thakur](https://github.com/lordtt13) | [](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)|
|[Fine-tuning di BlenderBotSmall per riassumere testi usando Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| Come effettuare il fine-tuning di BlenderBotSmall per riassumere testi su un dataset personalizzato, usando Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)|
|[Fine-tuning di Electra e interpretazione con Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | Come effettuare il fine-tuning di Electra per l'analisi dei sentimenti e intepretare le predizioni con Captum Integrated Gradients. | [Eliza Szczechla](https://elsanns.github.io) | [](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)|
|[Fine-tuning di un modello GPT-2 non inglese con la classe Trainer](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Come effettuare il fine-tuning di un modello GPT-2 non inglese con la classe Trainer. | [Philipp Schmid](https://www.philschmid.de) | [](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)|
|[Fine-tuning di un modello DistilBERT per la classificazione multi-etichetta](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | Come effettuare il fine-tuning di un modello DistilBERT per l'attività di classificazione multi-etichetta. | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)|
|[Fine-tuning di ALBERT per la classificazione di coppie di frasi](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | Come effettuare il fine-tuning di un modello ALBERT - o un altro modello BERT-based - per l'attività di classificazione di coppie di frasi. | [Nadir El Manouzi](https://github.com/NadirEM) | [](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)|
|[Fine-tuning di Roberta per l'analisi di sentimenti](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | Come effettuare il fine-tuning di un modello Roberta per l'analisi di sentimenti. | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)|
|[Valutazione di modelli che generano domande](https://github.com/flexudy-pipe/qugeev) | Quanto sono accurate le risposte alle domande generate dal tuo modello transformer seq2seq? | [Pascal Zoleko](https://github.com/zolekode) | [](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)|
|[Classificazione di testo con DistilBERT e Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | Come effettuare il fine-tuning di DistilBERT per la classificazione di testo in TensorFlow. | [Peter Bayerle](https://github.com/peterbayerle) | [](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)|
|[Utilizzo di BERT per riassumere testi con un modello Encoder-Decoder su CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* attraverso l'utilizzo di un checkpoint *bert-base-uncased* per riassumere testi su CNN/Dailymail. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)|
|[Utilizzo di RoBERTa per riassumere testi con un modello Encoder-Decoder su BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* (condiviso) attraverso l'utilizzo di un checkpoint *roberta-base* per riassumere testi su BBC/XSum. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)|
|[Fine-tuning di TAPAS su Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | Come effettuare il fine-tuning di un modello *TapasForQuestionAnswering* attraverso l'utilizzo di un checkpoint *tapas-base* sul dataset Sequential Question Answering (SQA). | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)|
|[Valutazione di TAPAS su Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | Come valutare un modello *TapasForSequenceClassification* - fine-tuned con un checkpoint *tapas-base-finetuned-tabfact* - usando una combinazione delle librerie 🤗 datasets e 🤗 transformers. | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)|
|[Fine-tuning di mBART per la traduzione](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Come effettuare il fine-tuning di mBART usando Seq2SeqTrainer per la traduzione da hindi a inglese.| [Vasudev Gupta](https://github.com/vasudevgupta7) | [](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)|
|[Fine-tuning di LayoutLM su FUNSD (un dataset per la comprensione della forma)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForTokenClassification* sul dataset FUNSD per l'estrazione di informazioni da documenti scannerizzati.| [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)|
|[Fine-tuning di DistilGPT2 e generazione di testo](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | Come effettuare il fine-tuning di DistilGPT2 e generare testo. | [Aakash Tripathi](https://github.com/tripathiaakash) | [](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)|
|[Fine-tuning di LED fino a 8 mila token](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | Come effettuare il fine-tuning di LED su PubMed per riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)|
|[Valutazione di LED su Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | Come valutare efficacemente LED sull'attività di riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)|
|[Fine-tuning di LayoutLM su RVL-CDIP, un dataset per la classificazione di documenti (immagini)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForSequenceClassification* sul dataset RVL-CDIP per la classificazione di documenti scannerizzati. | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)|
|[Decodifica Wav2Vec2 CTC con variazioni di GPT2](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | Come decodificare sequenze CTC, variate da modelli di linguaggio. | [Eric Lam](https://github.com/voidful) | [](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)
|[Fine-tuning di BART per riassumere testi in due lingue con la classe Trainer](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi in due lingue usando la classe Trainer. | [Eliza Szczechla](https://github.com/elsanns) | [](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)|
|[Valutazione di Big Bird su Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Come valutare BigBird su question answering di "lunghi" documenti attraverso Trivia QA. | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)|
| [Creazione di sottotitoli per video usando Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Come creare sottotitoli per qualsiasi video di YouTube trascrivendo l'audio con Wav2Vec. | [Niklas Muennighoff](https://github.com/Muennighoff) |[](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) |
| [Fine-tuning di Vision Transformer su CIFAR-10 usando PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e PyTorch Lightning.| [Niels Rogge](https://github.com/nielsrogge) |[](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) |
| [Fine-tuning di Vision Transformer su CIFAR-10 usando 🤗 Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e 🤗 Trainer. | [Niels Rogge](https://github.com/nielsrogge) |[](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) |
| [Valutazione di LUKE su Open Entity, un dataset di entity typing](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Come valutare un modello *LukeForEntityClassification* sul dataset Open Entity. | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) |
| [Valutazione di LUKE su TACRED, un dataset per l'estrazione di relazioni](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | Come valutare un modello *LukeForEntityPairClassification* sul dataset TACRED. | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) |
| [Valutazione di LUKE su CoNLL-2003, un importante benchmark NER](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | Come valutare un modello *LukeForEntitySpanClassification* sul dataset CoNLL-2003. | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) |
| [Valutazione di BigBird-Pegasus su dataset PubMed](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | Come valutare un modello *BigBirdPegasusForConditionalGeneration* su dataset PubMed. | [Vasudev Gupta](https://github.com/vasudevgupta7) | [](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) |
| [Classificazione di emozioni dal discorso con Wav2Vec2](https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | Come utilizzare un modello pre-addestrato Wav2Vec2 per la classificazione di emozioni sul dataset MEGA. | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) |
| [Rilevamento oggetti in un'immagine con DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | Come usare un modello addestrato *DetrForObjectDetection* per rilevare oggetti in un'immagine e visualizzare l'attention. | [Niels Rogge](https://github.com/NielsRogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) |
| [Fine-tuning di DETR su un dataset personalizzato per rilevare oggetti](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | Come effettuare fine-tuning di un modello *DetrForObjectDetection* su un dataset personalizzato per rilevare oggetti. | [Niels Rogge](https://github.com/NielsRogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) |
| [Fine-tuning di T5 per Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Come effettuare il fine-tuning di *T5* per un'attività di Named Entity Recognition. | [Ogundepo Odunayo](https://github.com/ToluClassics) | [](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
<!-- File: docs/source/it/serialization.md -->

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Esporta modelli 🤗 Transformers
Se devi distribuire modelli 🤗 Transformers in ambienti di produzione, consigliamo
di esportarli in un formato serializzato che possa essere caricato ed eseguito
su runtime e hardware specializzati. In questa guida ti mostreremo come
esportare modelli 🤗 Transformers in due formati ampiamente utilizzati: ONNX e TorchScript.
Una volta esportato, un modello può essere ottimizzato per l'inferenza tramite tecniche come
la quantizzazione e il pruning. Se sei interessato a ottimizzare i tuoi modelli per eseguirli
con la massima efficienza, dai un'occhiata alla libreria [🤗 Optimum](https://github.com/huggingface/optimum).
## ONNX
Il progetto [ONNX (Open Neural Network eXchange)](http://onnx.ai) è uno standard
aperto che definisce un insieme comune di operatori e un formato di file comune per
rappresentare modelli di deep learning in un'ampia varietà di framework, tra cui
PyTorch e TensorFlow. Quando un modello viene esportato nel formato ONNX, questi
operatori sono usati per costruire un grafo computazionale (spesso chiamato
_intermediate representation_) che rappresenta il flusso dei dati attraverso la
rete neurale.
Esponendo un grafo con operatori e tipi di dati standardizzati, ONNX rende
più facile passare da un framework all'altro. Ad esempio, un modello allenato in PyTorch può
essere esportato in formato ONNX e quindi importato in TensorFlow (e viceversa).
🤗 Transformers fornisce un pacchetto `transformers.onnx` che ti consente di
convertire i checkpoint del modello in un grafo ONNX sfruttando gli oggetti di configurazione.
Questi oggetti di configurazione sono già pronti per una serie di architetture di modelli,
e sono progettati per essere facilmente estensibili ad altre architetture.
Le configurazioni pronte includono le seguenti architetture:
<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->
- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- CamemBERT
- ConvBERT
- Data2VecText
- Data2VecVision
- DeiT
- DistilBERT
- ELECTRA
- FlauBERT
- GPT Neo
- GPT-J
- I-BERT
- LayoutLM
- M2M100
- Marian
- mBART
- MobileBERT
- OpenAI GPT-2
- Perceiver
- PLBart
- RoBERTa
- RoFormer
- SqueezeBERT
- T5
- ViT
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL
Nelle prossime due sezioni, ti mostreremo come:
* Esportare un modello supportato usando il pacchetto `transformers.onnx`.
* Esportare un modello personalizzato per un'architettura non supportata.
### Esportazione di un modello in ONNX
Per esportare un modello 🤗 Transformers in ONNX, dovrai prima installare alcune
dipendenze extra:
```bash
pip install transformers[onnx]
```
Il pacchetto `transformers.onnx` può essere usato come modulo Python:
```bash
python -m transformers.onnx --help
usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output
positional arguments:
output Path indicating where to store generated ONNX model.
optional arguments:
-h, --help show this help message and exit
-m MODEL, --model MODEL
Model ID on huggingface.co or path on disk to load model from.
--feature {causal-lm, ...}
The type of features to export the model with.
--opset OPSET ONNX opset version to export the model with.
--atol ATOL Absolute difference tolerance when validating the model.
```
L'esportazione di un checkpoint utilizzando una configurazione già pronta può essere eseguita come segue:
```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```
che dovrebbe mostrare i seguenti log:
```bash
Validating ONNX model...
-[â] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[â] (2, 8, 768) matches (2, 8, 768)
-[â] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```
Questo esporta un grafo ONNX del checkpoint definito dall'argomento `--model`.
In questo esempio è `distilbert-base-uncased`, ma può essere qualsiasi checkpoint
sull'Hugging Face Hub o uno memorizzato localmente.
Il file risultante `model.onnx` può quindi essere eseguito su uno dei [tanti
acceleratori](https://onnx.ai/supported-tools.html#deployModel) che supportano
lo standard ONNX. Ad esempio, possiamo caricare ed eseguire il modello con [ONNX
Runtime](https://onnxruntime.ai/) come segue:
```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
I nomi di output richiesti (cioÚ `["last_hidden_state"]`) possono essere ottenuti
dando un'occhiata alla configurazione ONNX di ogni modello. Ad esempio, per
DistilBERT abbiamo:
```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig
>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```
Il processo Ú identico per i checkpoint TensorFlow sull'Hub. Ad esempio, possiamo
esportare un checkpoint TensorFlow puro dall'[organizzazione
Keras](https://huggingface.co/keras-io) come segue:
```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
Per esportare un modello memorizzato localmente, devi disporre dei pesi del modello
e dei file del tokenizer memorizzati in una directory. Ad esempio, possiamo caricare e salvare un
checkpoint come segue:
<frameworkcontent>
<pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> # Load tokenizer and PyTorch weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```
Una volta salvato il checkpoint, possiamo esportarlo su ONNX puntando l'argomento `--model`
del pacchetto `transformers.onnx` nella directory desiderata:
```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt>
<tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```
Una volta salvato il checkpoint, possiamo esportarlo su ONNX puntando l'argomento `--model`
del pacchetto `transformers.onnx` nella directory desiderata:
```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf>
</frameworkcontent>
### Selezione delle caratteristiche per diverse topologie di modello
Ogni configurazione già pronta viene fornita con una serie di _caratteristiche_ che ti consentono di
esportare modelli per diversi tipi di topologie o attività . Come mostrato nella tabella
di seguito, ogni caratteristica Ú associata a una diversa Auto Class:
| Caratteristica | Auto Class |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past` | `AutoModelForCausalLM` |
| `default`, `default-with-past` | `AutoModel` |
| `masked-lm` | `AutoModelForMaskedLM` |
| `question-answering` | `AutoModelForQuestionAnswering` |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM` |
| `sequence-classification` | `AutoModelForSequenceClassification` |
| `token-classification` | `AutoModelForTokenClassification` |
Per ciascuna configurazione, puoi trovare l'elenco delle funzionalità supportate tramite il
`FeaturesManager`. Ad esempio, per DistilBERT abbiamo:
```python
>>> from transformers.onnx.features import FeaturesManager
>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```
Puoi quindi passare una di queste funzionalità all'argomento `--feature` nel
pacchetto `transformers.onnx`. Ad esempio, per esportare un modello di classificazione del testo
possiamo scegliere un modello ottimizzato dall'Hub ed eseguire:
```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
--feature=sequence-classification onnx/
```
che visualizzerà i seguenti log:
```bash
Validating ONNX model...
-[â] ONNX model output names match reference model ({'logits'})
- Validating ONNX Model output "logits":
-[â] (2, 2) matches (2, 2)
-[â] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```
Puoi notare che in questo caso i nomi di output del modello ottimizzato sono
`logits` invece di `last_hidden_state` che abbiamo visto con il
checkpoint `distilbert-base-uncased` precedente. Questo Ú previsto per il
modello ottimizzato, visto che ha una testa di classificazione di sequenze.
<Tip>
Le caratteristiche che hanno un suffisso `with-past` (ad es. `causal-lm-with-past`)
corrispondono a topologie di modello con stati nascosti precalcolati (chiave e valori
nei blocchi di attenzione) che possono essere utilizzati per la decodifica autoregressiva veloce.
</Tip>
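Per dare un'idea del perché questi stati precalcolati accelerano la decodifica, ecco uno schizzo puramente illustrativo (il modello vero restituirebbe logits e tensori chiave/valore, non interi):

```python
# Schizzo illustrativo: decodifica autoregressiva con una "cache" di stati passati.
# Senza cache si ricalcola l'attenzione sull'intero prefisso a ogni passo;
# le esportazioni "with-past" ricevono invece gli stati precalcolati come input.
def fake_forward(token, past):
    # fa le veci del modello: restituisce il token successivo e la cache aggiornata
    past = past + [token]
    return token + 1, past

generated = [101]  # id di un ipotetico token iniziale
past = []
for _ in range(3):
    next_token, past = fake_forward(generated[-1], past)
    generated.append(next_token)

print(generated)  # [101, 102, 103, 104]
```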
### Esportazione di un modello per un'architettura non supportata
Se desideri esportare un modello la cui architettura non Ú nativamente supportata dalla
libreria, ci sono tre passaggi principali da seguire:
1. Implementare una configurazione ONNX personalizzata.
2. Esportare il modello in ONNX.
3. Convalidare gli output di PyTorch e dei modelli esportati.
In questa sezione, vedremo come DistilBERT Ú stato implementato per mostrare cosa Ú
coinvolto in ogni passaggio.
#### Implementazione di una configurazione ONNX personalizzata
Iniziamo con l'oggetto di configurazione ONNX. Forniamo tre classi
astratte da cui ereditare, a seconda del tipo di architettura
del modello che desideri esportare:
* I modelli basati su encoder ereditano da [`~onnx.config.OnnxConfig`]
* I modelli basati su decoder ereditano da [`~onnx.config.OnnxConfigWithPast`]
* I modelli encoder-decoder ereditano da [`~onnx.config.OnnxSeq2SeqConfigWithPast`]
<Tip>
Un buon modo per implementare una configurazione ONNX personalizzata Ú guardare l'implementazione
esistente nel file `configuration_<model_name>.py` di un'architettura simile.
</Tip>
Poiché DistilBERT Ú un modello basato su encoder, la sua configurazione eredita da
`OnnxConfig`:
```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig
>>> class DistilBertOnnxConfig(OnnxConfig):
... @property
... def inputs(self) -> Mapping[str, Mapping[int, str]]:
... return OrderedDict(
... [
... ("input_ids", {0: "batch", 1: "sequence"}),
... ("attention_mask", {0: "batch", 1: "sequence"}),
... ]
... )
```
Ogni oggetto di configurazione deve implementare la proprietà `inputs` e restituire una
mappatura, dove ogni chiave corrisponde a un input previsto e ogni valore
indica gli assi di quell'input. Per DistilBERT, possiamo vedere che sono richiesti
due input: `input_ids` e `attention_mask`. Questi input hanno la stessa forma,
`(batch_size, sequence_length)`: per questo motivo vediamo gli stessi assi usati nella
configurazione.
<Tip>
Puoi notare che la proprietà `inputs` per `DistilBertOnnxConfig` restituisce un
`OrderedDict`. Ciò garantisce che gli input corrispondano alla loro posizione
relativa all'interno del metodo `PreTrainedModel.forward()` durante il tracciamento del grafico.
Raccomandiamo di usare un `OrderedDict` per le proprietà `inputs` e `outputs`
quando si implementano configurazioni ONNX personalizzate.
</Tip>
Dopo aver implementato una configurazione ONNX, Ú possibile istanziarla
fornendole la configurazione del modello base come segue:
```python
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```
L'oggetto risultante ha diverse proprietà utili. Ad esempio, Ú possibile visualizzare il
set di operatori ONNX che verrà utilizzato durante l'esportazione:
```python
>>> print(onnx_config.default_onnx_opset)
11
```
à inoltre possibile visualizzare gli output associati al modello come segue:
```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```
Puoi notare che la proprietà degli output segue la stessa struttura degli input;
restituisce un `OrderedDict` di output con nome e le loro forme. La struttura degli output
Ú legata alla scelta della funzionalità con cui viene inizializzata la configurazione.
Per impostazione predefinita, la configurazione ONNX viene inizializzata con la funzionalità `default`,
che corrisponde all'esportazione di un modello caricato con la classe `AutoModel`. Se
desideri esportare una topologia di modello diversa, Ú sufficiente fornire una funzionalità diversa
all'argomento `task` quando inizializzi la configurazione ONNX. Ad esempio, se
volessimo esportare DistilBERT con una testa di classificazione per sequenze, potremmo
usare:
```python
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```
<Tip>
Tutte le proprietà e i metodi di base associati a [`~onnx.config.OnnxConfig`] e le
altre classi di configurazione possono essere sovrascritte se necessario. Guarda
[`BartOnnxConfig`] per un esempio avanzato.
</Tip>
#### Esportazione del modello
Una volta implementata la configurazione ONNX, il passaggio successivo consiste nell'esportare il
modello. Qui possiamo usare la funzione `export()` fornita dal
pacchetto `transformers.onnx`. Questa funzione si aspetta la configurazione ONNX, insieme
al modello base, al tokenizer e al percorso in cui salvare il file esportato:
```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel
>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
Gli `onnx_inputs` e `onnx_outputs` restituiti dalla funzione `export()` sono
liste delle chiavi definite nelle proprietà `inputs` e `outputs` della
configurazione. Una volta esportato il modello, puoi verificare che sia ben
formato come segue:
```python
>>> import onnx
>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```
<Tip>
Se il tuo modello Ú più grande di 2 GB, vedrai che vengono creati molti file aggiuntivi
durante l'esportazione. Questo Ú _previsto_, perché ONNX utilizza i [Protocol
Buffer](https://developers.google.com/protocol-buffers/) per memorizzare il modello e
questi hanno un limite di dimensione di 2 GB. Vedi la [documentazione
ONNX](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md)
per istruzioni su come caricare modelli con dati esterni.
</Tip>
#### Convalida degli output del modello
Il passaggio finale consiste nel verificare che gli output del modello di base e di quello esportato
corrispondano entro una soglia di tolleranza assoluta. Qui possiamo usare la
funzione `validate_model_outputs()` fornita dal pacchetto `transformers.onnx`
come segue:
```python
>>> from transformers.onnx import validate_model_outputs
>>> validate_model_outputs(
... onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```
Questa funzione usa il metodo `OnnxConfig.generate_dummy_inputs()` per generare
input per il modello di base e quello esportato e la tolleranza assoluta può essere
definita nella configurazione. Generalmente troviamo una corrispondenza numerica nell'intervallo da 1e-6
a 1e-4, anche se Ú probabile che qualsiasi cosa inferiore a 1e-3 vada bene.
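Il tipo di confronto eseguito dalla validazione può essere illustrato con un piccolo esempio NumPy (i valori sono inventati a scopo dimostrativo):

```python
import numpy as np

# Output ipotetici del modello di riferimento e di quello esportato
reference = np.array([0.1234567, -0.7654321, 2.5000001])
exported = reference + 5e-6  # piccola deriva numerica introdotta dall'esportazione

# validate_model_outputs() esegue un confronto di questo tipo, con una
# tolleranza assoluta (atol); qui fissiamo rtol=0 per usare solo atol
assert np.allclose(reference, exported, rtol=0, atol=1e-5)

# con una tolleranza più stretta il confronto fallirebbe
assert not np.allclose(reference, exported, rtol=0, atol=1e-6)
```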
### Contribuire con una nuova configurazione a ð€ Transformers
Stiamo cercando di espandere l'insieme di configurazioni già pronte e di accettare
contributi della community! Se vuoi contribuire con la tua aggiunta
nella libreria, dovrai:
* Implementare la configurazione ONNX nel file `configuration_<model_name>.py` corrispondente
* Includere l'architettura del modello e le funzioni corrispondenti in [`~onnx.features.FeatureManager`]
* Aggiungere la tua architettura del modello ai test in `test_onnx_v2.py`
Scopri come Ú stata contribuita la configurazione per
[IBERT](https://github.com/huggingface/transformers/pull/14868/files) per
avere un'idea di cosa Ú coinvolto.
## TorchScript
<Tip>
Questo Ú l'inizio dei nostri esperimenti con TorchScript e stiamo ancora esplorando le sue capacità con
modelli a input di dimensione variabile. Ã una nostra priorità e approfondiremo le nostre analisi nelle prossime versioni,
con più esempi di codice, un'implementazione più flessibile e benchmark che confrontano i codici basati su Python con quelli compilati con
TorchScript.
</Tip>
Secondo la documentazione di Pytorch: "TorchScript Ú un modo per creare modelli serializzabili e ottimizzabili da codice
Pytorch". I due moduli di Pytorch [JIT e TRACE](https://pytorch.org/docs/stable/jit.html) consentono allo sviluppatore di esportare
il proprio modello per riutilizzarlo in altri programmi, come i programmi C++ orientati all'efficienza.
Abbiamo fornito un'interfaccia che consente l'esportazione di modelli ð€ Transformers in TorchScript in modo che possano essere riutilizzati
in un ambiente diverso rispetto a un programma Python basato su Pytorch. Qui spieghiamo come esportare e utilizzare i nostri modelli utilizzando
TorchScript.
Esportare un modello richiede due cose:
- Un passaggio in avanti con input fittizi.
- L'istanziazione del modello con il flag `torchscript`.
Queste necessità implicano diverse cose a cui gli sviluppatori dovrebbero prestare attenzione. Questi dettagli sono mostrati sotto.
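Questi due passaggi possono essere illustrati, in forma ridotta, tracciando un piccolo modulo di esempio (un modulo giocattolo, non un modello Transformers):

```python
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    # modulo giocattolo: una semplice trasformazione affine elemento per elemento
    def forward(self, x):
        return x * 2 + 1


model = TinyModel().eval()  # il modello deve essere in modalità valutazione

# 1. un input fittizio per il passaggio in avanti usato dal tracciamento
dummy_input = torch.ones(1, 4)

# 2. la traccia registra le operazioni eseguite sull'input fittizio
traced = torch.jit.trace(model, dummy_input)

out = traced(torch.full((1, 4), 3.0))
print(out)  # tensor([[7., 7., 7., 7.]])
```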
### Flag TorchScript e pesi legati
Questo flag Ú necessario perché la maggior parte dei modelli linguistici in questo repository ha pesi legati tra lo
strato `Embedding` e lo strato `Decoding`. TorchScript non consente l'esportazione di modelli che hanno pesi
legati, quindi Ú necessario prima slegare e clonare i pesi.
Ciò implica che i modelli istanziati con il flag `torchscript` hanno lo strato `Embedding` e lo strato `Decoding`
separati, il che significa che non dovrebbero essere addestrati in seguito. L'addestramento de-sincronizzerebbe i due
strati, portando a risultati inaspettati.
Questo non Ú il caso dei modelli che non hanno una testa di language modeling, poiché questi non hanno pesi legati. Tali modelli
possono essere esportati in sicurezza senza il flag `torchscript`.
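Uno schizzo minimale, con strati ipotetici, di cosa significhi slegare e clonare i pesi legati:

```python
import torch.nn as nn

embedding = nn.Embedding(10, 4)         # strato "Embedding"
lm_head = nn.Linear(4, 10, bias=False)  # strato "Decoding"

# Pesi legati: i due strati condividono lo stesso tensore
lm_head.weight = embedding.weight
assert lm_head.weight.data_ptr() == embedding.weight.data_ptr()

# Slegare e clonare (concettualmente ciò che richiede il flag `torchscript`):
# ogni strato ottiene la propria copia indipendente del tensore
lm_head.weight = nn.Parameter(embedding.weight.detach().clone())
assert lm_head.weight.data_ptr() != embedding.weight.data_ptr()
```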
### Input fittizi e standard lengths
Gli input fittizi sono usati per eseguire un passaggio in avanti del modello. Mentre i valori degli input si propagano attraverso gli strati,
Pytorch tiene traccia delle diverse operazioni eseguite su ciascun tensore. Queste operazioni registrate vengono quindi utilizzate per
creare la "traccia" del modello.
La traccia viene creata relativamente alle dimensioni degli input. Ã quindi vincolata dalle dimensioni dell'input
fittizio e non funzionerà per altre lunghezze di sequenza o dimensioni di batch. Quando si proverà con una dimensione diversa, verrà sollevato un errore
come:
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
Si consiglia pertanto di tracciare il modello con una dimensione di input fittizia grande almeno quanto il più grande
input che verrà fornito al modello durante l'inferenza. à possibile eseguire il padding per riempire i valori mancanti. Tuttavia, poiché il modello
sarà stato tracciato con una dimensione di input grande, anche le dimensioni delle diverse matrici saranno grandi,
risultando in più calcoli.
Si raccomanda di prestare attenzione al numero totale di operazioni eseguite su ciascun input e di seguire da vicino le prestazioni
durante l'esportazione di modelli con lunghezza di sequenza variabile.
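Il padding a una lunghezza fissa può essere schematizzato, ad esempio, così (funzione ipotetica e a solo scopo illustrativo; nella pratica si usa il tokenizer con `padding="max_length"`):

```python
def pad_to_length(token_ids, max_length, pad_id=0):
    """Porta (o tronca) una lista di id di token a una lunghezza fissa."""
    if len(token_ids) >= max_length:
        return token_ids[:max_length]
    return token_ids + [pad_id] * (max_length - len(token_ids))


# sequenze di lunghezza diversa portate alla dimensione usata per il tracciamento
batch = [[101, 2040, 2001, 102], [101, 2198, 102]]
padded = [pad_to_length(seq, 7) for seq in batch]
print(padded)
# [[101, 2040, 2001, 102, 0, 0, 0], [101, 2198, 102, 0, 0, 0, 0]]
```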
### Usare TorchScript in Python
Di seguito Ú riportato un esempio, che mostra come salvare, caricare modelli e come utilizzare la traccia per l'inferenza.
#### Salvare un modello
Questo frammento di codice mostra come usare TorchScript per esportare un `BertModel`. Qui il `BertModel` Ú istanziato a partire da
una classe `BertConfig` e quindi salvato su disco con il nome di file `traced_bert.pt`.
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
enc = BertTokenizer.from_pretrained("bert-base-uncased")
# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
vocab_size_or_config_json_file=32000,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
torchscript=True,
)
# Instantiating the model
model = BertModel(config)
# The model needs to be in evaluation mode
model.eval()
# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```
#### Caricare un modello
Questo frammento di codice mostra come caricare il `BertModel` che era stato precedentemente salvato su disco con il nome `traced_bert.pt`.
Stiamo riutilizzando il `dummy_input` precedentemente inizializzato.
```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()
all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```
#### Utilizzare un modello tracciato per l'inferenza
Usare il modello tracciato per l'inferenza Ú semplice come usare il suo metodo dunder `__call__`:
```python
traced_model(tokens_tensor, segments_tensors)
```
### Implementare modelli HuggingFace TorchScript su AWS utilizzando Neuron SDK
AWS ha introdotto [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/)
famiglia di istanze per l'inferenza di machine learning a basso costo e ad alte prestazioni nel cloud.
Le istanze Inf1 sono alimentate dal chip AWS Inferentia, un acceleratore hardware personalizzato,
specializzato in carichi di lavoro di inferenza di deep learning.
[AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#)
Ú l'SDK per Inferentia che supporta il tracciamento e l'ottimizzazione dei modelli transformers per
distribuzione su Inf1. L'SDK Neuron fornisce:
1. API di facile utilizzo, con la modifica di una sola riga di codice, per tracciare e ottimizzare un modello TorchScript per l'inferenza nel cloud.
2. Ottimizzazioni delle prestazioni pronte all'uso per un [miglior rapporto costi-prestazioni](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Supporto per i modelli transformers di HuggingFace costruiti con [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html)
o [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).
#### Implicazioni
I modelli Transformers basati sull'architettura [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert),
o sue varianti come [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert)
e [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta),
funzioneranno meglio su Inf1 per attività non generative come il question answering estrattivo,
la classificazione di sequenze e la classificazione di token. In alternativa, le attività di generazione di testo
possono essere adattate per essere eseguite su Inf1, secondo questo [tutorial AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html).
Ulteriori informazioni sui modelli che possono essere convertiti per Inferentia senza modifiche si
trovano nella [sezione Model Architecture Fit della documentazione Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia).
#### Dipendenze
L'utilizzo di AWS Neuron per convertire i modelli richiede le seguenti dipendenze e l'ambiente:
* Un [ambiente Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide),
che Ú preconfigurato sulle [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).
#### Convertire un modello per AWS Neuron
Usando lo stesso script come in [Usare TorchScript in Python](https://huggingface.co/docs/transformers/main/en/serialization#using-torchscript-in-python)
per tracciare un `BertModel`, importa l'estensione del framework `torch.neuron` per accedere
ai componenti di Neuron SDK tramite un'API Python.
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```
E modifica solo la riga di codice della traccia.
Da:
```python
torch.jit.trace(model, [tokens_tensor, segments_tensors])
```
A:
```python
torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```
Questa modifica consente a Neuron SDK di tracciare il modello e ottimizzarlo per l'esecuzione nelle istanze Inf1.
Per ulteriori informazioni sulle funzionalità , gli strumenti, i tutorial di esempi e gli ultimi aggiornamenti di AWS Neuron SDK,
consultare la [documentazione AWS NeuronSDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). | 0 |
hf_public_repos/transformers/docs/source | hf_public_repos/transformers/docs/source/it/training.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fine-tuning di un modello pre-addestrato
[[open-in-colab]]
Ci sono benefici significativi nell'usare un modello pre-addestrato: si riducono i costi computazionali e l'impronta di carbonio, e puoi usare modelli allo stato dell'arte senza doverli addestrare da zero. ð€ Transformers consente l'accesso a migliaia di modelli pre-addestrati per un'ampia gamma di compiti. Quando usi un modello pre-addestrato, lo alleni su un dataset specifico per il tuo compito. Questo Ú conosciuto come fine-tuning, una tecnica di addestramento incredibilmente potente. In questa esercitazione potrai fare il fine-tuning di un modello pre-addestrato con un framework di deep learning a tua scelta:
* Fine-tuning di un modello pre-addestrato con ð€ Transformers [`Trainer`].
* Fine-tuning di un modello pre-addestrato in TensorFlow con Keras.
* Fine-tuning di un modello pre-addestrato con PyTorch.
<a id='data-processing'></a>
## Preparare un dataset
<Youtube id="_BZearw7f0w"/>
Prima di poter fare il fine-tuning di un modello pre-addestrato, scarica un dataset e preparalo per l'addestramento. La precedente esercitazione ti ha mostrato come processare i dati per l'addestramento e adesso hai l'opportunità di metterti alla prova!
Inizia caricando il dataset [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full):
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("yelp_review_full")
>>> dataset["train"][100]
{'label': 0,
'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```
Come già sai, hai bisogno di un tokenizer per processare il testo e includere una strategia di padding e truncation per gestire sequenze di lunghezza variabile. Per processare il dataset in un unico passo, usa il metodo [`map`](https://huggingface.co/docs/datasets/process#map) di ð€ Datasets che applica la funzione di preprocessing all'intero dataset:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> def tokenize_function(examples):
... return tokenizer(examples["text"], padding="max_length", truncation=True)
>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
```
Se vuoi, puoi creare un sottoinsieme più piccolo del dataset per il fine-tuning così da ridurre il tempo necessario:
```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
<a id='trainer'></a>
## Addestramento
<frameworkcontent>
<pt>
<Youtube id="nvBXf7s7vTI"/>
ð€ Transformers mette a disposizione la classe [`Trainer`] ottimizzata per addestrare modelli ð€ Transformers, rendendo semplice iniziare l'addestramento senza scrivere manualmente il tuo ciclo di addestramento. L'API [`Trainer`] supporta un'ampia gamma di opzioni e funzionalità di addestramento come logging, gradient accumulation e mixed precision.
Inizia caricando il tuo modello e specificando il numero di etichette (labels) attese. Dalla [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields) del dataset Yelp Review, sai che ci sono cinque etichette:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
<Tip>
Potresti vedere un warning dato che alcuni dei pesi pre-addestrati non sono stati utilizzati e altri pesi sono stati inizializzati casualmente. Non preoccuparti, Ú completamente normale! L'head pre-addestrata del modello BERT viene scartata e rimpiazzata da una classification head inizializzata casualmente. Farai il fine-tuning di questa nuova head del modello sul tuo compito di classificazione, trasferendogli la conoscenza del modello pre-addestrato.
</Tip>
### Iperparametri per il training
Successivamente, crea una classe [`TrainingArguments`] contenente tutti gli iperparametri che si possono regolare, nonché le variabili per attivare le diverse opzioni di addestramento. Per questa esercitazione puoi iniziare con gli [iperparametri](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) di addestramento predefiniti, ma sentiti libero di sperimentare per trovare la configurazione ottimale per te.
Specifica dove salvare i checkpoints del tuo addestramento:
```py
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(output_dir="test_trainer")
```
### Metriche
[`Trainer`] non valuta automaticamente le performance del modello durante l'addestramento. Dovrai passare a [`Trainer`] una funzione che calcola e restituisce le metriche. La libreria ð€ Datasets mette a disposizione una semplice funzione [`accuracy`](https://huggingface.co/metrics/accuracy) che puoi caricare con la funzione `load_metric` (guarda questa [esercitazione](https://huggingface.co/docs/datasets/metrics) per maggiori informazioni):
```py
>>> import numpy as np
>>> from datasets import load_metric
>>> metric = load_metric("accuracy")
```
Richiama `compute` su `metric` per calcolare l'accuratezza delle tue previsioni. Prima di passare le previsioni a `compute`, hai bisogno di convertire i logits in previsioni (ricorda che tutti i modelli ð€ Transformers restituiscono logits):
```py
>>> def compute_metrics(eval_pred):
... logits, labels = eval_pred
... predictions = np.argmax(logits, axis=-1)
... return metric.compute(predictions=predictions, references=labels)
```
Se preferisci monitorare le tue metriche di valutazione durante il fine-tuning, specifica il parametro `evaluation_strategy` nei tuoi training arguments per restituire le metriche di valutazione ad ogni epoca di addestramento:
```py
>>> from transformers import TrainingArguments, Trainer
>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```
### Trainer
Crea un oggetto [`Trainer`] col tuo modello, training arguments, dataset di training e test, e funzione di valutazione:
```py
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
```
Poi metti a punto il modello richiamando [`~transformers.Trainer.train`]:
```py
>>> trainer.train()
```
</pt>
<tf>
<a id='keras'></a>
<Youtube id="rnTGBy2ax1c"/>
I modelli ð€ Transformers supportano anche l'addestramento in TensorFlow usando l'API di Keras.
### Convertire dataset nel formato per TensorFlow
Il [`DefaultDataCollator`] assembla i tensori in batch su cui il modello si addestrerà. Assicurati di specificare `return_tensors="tf"` per restituire tensori TensorFlow:
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
<Tip>
[`Trainer`] usa [`DataCollatorWithPadding`] in maniera predefinita in modo da non dover specificare esplicitamente un collettore di dati.
</Tip>
Successivamente, converti i datasets tokenizzati in TensorFlow datasets con il metodo [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset). Specifica il tuo input in `columns` e le tue etichette in `label_cols`:
```py
>>> tf_train_dataset = small_train_dataset.to_tf_dataset(
... columns=["attention_mask", "input_ids", "token_type_ids"],
... label_cols=["labels"],
... shuffle=True,
... collate_fn=data_collator,
... batch_size=8,
... )
>>> tf_validation_dataset = small_eval_dataset.to_tf_dataset(
... columns=["attention_mask", "input_ids", "token_type_ids"],
... label_cols=["labels"],
... shuffle=False,
... collate_fn=data_collator,
... batch_size=8,
... )
```
### Compilazione e addestramento
Carica un modello TensorFlow col numero atteso di etichette:
```py
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
Poi compila e fai il fine-tuning del tuo modello usando [`fit`](https://keras.io/api/models/model_training_apis/) come faresti con qualsiasi altro modello di Keras:
```py
>>> model.compile(
... optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
... loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
... metrics=tf.metrics.SparseCategoricalAccuracy(),
... )
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
```
</tf>
</frameworkcontent>
<a id='pytorch_native'></a>
## Addestramento in PyTorch nativo
<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>
[`Trainer`] si occupa del ciclo di addestramento e ti consente di mettere a punto un modello con una sola riga di codice. Se preferisci scrivere il tuo ciclo di addestramento, puoi anche fare il fine-tuning di un modello 🤗 Transformers in PyTorch nativo.
A questo punto, potresti avere bisogno di riavviare il tuo notebook o eseguire il seguente codice per liberare un po' di memoria:
```py
del model
del pytorch_model
del trainer
torch.cuda.empty_cache()
```
Successivamente, postprocessa manualmente `tokenized_datasets` per prepararlo all'addestramento.
1. Rimuovi la colonna `text` perché il modello non accetta testo grezzo come input:
```py
>>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
```
2. Rinomina la colonna `label` in `labels` perché il modello si aspetta che questo argomento si chiami `labels`:
```py
>>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```
3. Imposta il formato del dataset in modo che restituisca tensori PyTorch al posto delle liste:
```py
>>> tokenized_datasets.set_format("torch")
```
Poi crea un piccolo sottocampione del dataset come visto precedentemente per velocizzare il fine-tuning:
```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
### DataLoader
Crea un `DataLoader` per i tuoi datasets di train e test così puoi iterare sui lotti di dati:
```py
>>> from torch.utils.data import DataLoader
>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```
Carica il tuo modello con il numero atteso di etichette:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
### Ottimizzatore e learning rate scheduler
Crea un ottimizzatore e il learning rate scheduler per fare il fine-tuning del modello. Usa l'ottimizzatore [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) di PyTorch:
```py
>>> from torch.optim import AdamW
>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```
Crea il learning rate scheduler predefinito da [`Trainer`]:
```py
>>> from transformers import get_scheduler
>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```
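A scopo puramente illustrativo, ecco uno schizzo in puro Python della formula dello scheduler `"linear"` (non è l'implementazione reale di 🤗 Transformers; nomi e valori sono solo d'esempio): il learning rate cresce linearmente durante l'eventuale warmup e poi decresce linearmente fino a zero.

```python
# Schizzo illustrativo (non l'implementazione reale di 🤗 Transformers):
# decadimento lineare del learning rate con warmup opzionale.
def linear_lr(step, base_lr=5e-5, num_warmup_steps=0, num_training_steps=100):
    if step < num_warmup_steps:
        # crescita lineare da 0 a base_lr durante il warmup
        return base_lr * step / max(1, num_warmup_steps)
    # decrescita lineare da base_lr a 0 nei passi rimanenti
    remaining = num_training_steps - step
    return base_lr * max(0.0, remaining / max(1, num_training_steps - num_warmup_steps))

# Esempio: a metà addestramento il learning rate è circa la metà di quello iniziale.
print(linear_lr(50))  # ≈ 2.5e-05
```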
Infine, specifica una GPU come `device`, se ne hai una a disposizione. Altrimenti, l'addestramento su CPU può richiedere diverse ore invece di un paio di minuti.
```py
>>> import torch
>>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
>>> model.to(device)
```
<Tip>
Ottieni l'accesso gratuito a una GPU sul cloud se non ne possiedi una usando un notebook sul web come [Colaboratory](https://colab.research.google.com/) o [SageMaker StudioLab](https://studiolab.sagemaker.aws/).
</Tip>
Ottimo, adesso possiamo addestrare! ð¥³
### Training loop
Per tenere traccia dei tuoi progressi durante l'addestramento, usa la libreria [tqdm](https://tqdm.github.io/) per aggiungere una progress bar sopra il numero dei passi di addestramento:
```py
>>> from tqdm.auto import tqdm
>>> progress_bar = tqdm(range(num_training_steps))
>>> model.train()
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... batch = {k: v.to(device) for k, v in batch.items()}
... outputs = model(**batch)
... loss = outputs.loss
... loss.backward()
... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
### Metriche
Proprio come è necessario aggiungere una funzione di valutazione al [`Trainer`], è necessario fare lo stesso quando scrivi il tuo ciclo di addestramento. Ma invece di calcolare e riportare la metrica alla fine di ogni epoca, questa volta accumulerai tutti i batch con [`add_batch`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=add_batch#datasets.Metric.add_batch) e calcolerai la metrica alla fine.
```py
>>> metric = load_metric("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
... batch = {k: v.to(device) for k, v in batch.items()}
... with torch.no_grad():
... outputs = model(**batch)
... logits = outputs.logits
... predictions = torch.argmax(logits, dim=-1)
... metric.add_batch(predictions=predictions, references=batch["labels"])
>>> metric.compute()
```
</pt>
</frameworkcontent>
<a id='additional-resources'></a>
## Altre risorse
Per altri esempi sul fine-tuning, fai riferimento a:
- [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) include script per addestrare compiti comuni di NLP in PyTorch e TensorFlow.
- [🤗 Transformers Notebooks](notebooks) contiene diversi notebook su come mettere a punto un modello per compiti specifici in PyTorch e TensorFlow.
<!-- docs/source/it/perf_infer_gpu_many.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Inferenza Efficiente su GPU Multiple
Questo documento contiene informazioni su come fare inferenza in maniera efficiente su GPU multiple.
<Tip>
Nota: Un setup con GPU multiple può utilizzare la maggior parte delle strategie descritte nella [sezione con GPU singola](./perf_infer_gpu_one). Tuttavia, è necessario conoscere delle tecniche semplici che possono essere utilizzate per un risultato migliore.
</Tip>
## `BetterTransformer` per inferenza più rapida
Abbiamo recentemente integrato `BetterTransformer` per inferenza più rapida su multi-GPU per modelli su testo, immagini e audio. Controlla il documento con queste integrazioni [qui](https://huggingface.co/docs/optimum/bettertransformer/overview) per maggiori dettagli.
<!-- docs/source/it/quicktour.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quick tour
[[open-in-colab]]
Entra in azione con 🤗 Transformers! Inizia utilizzando [`pipeline`] per un'inferenza veloce, carica un modello pre-allenato e un tokenizer con una [AutoClass](./model_doc/auto) per risolvere i tuoi compiti legati a testo, immagini o audio.
<Tip>
Tutti gli esempi di codice presenti in questa documentazione hanno un pulsante in alto a sinistra che permette di selezionare tra PyTorch e TensorFlow. Se
questo non Ú presente, ci si aspetta che il codice funzioni per entrambi i backend senza alcun cambiamento.
</Tip>
## Pipeline
[`pipeline`] è il modo più semplice per utilizzare un modello pre-allenato per un dato compito.
<Youtube id="tiZFewofSLM"/>
La [`pipeline`] supporta molti compiti comuni:
**Testo**:
* Analisi del Sentimento (Sentiment Analysis, in inglese): classifica la polarità di un testo dato.
* Generazione del Testo (Text Generation, in inglese): genera del testo a partire da un dato input.
* Riconoscimento di Entità (Named Entity Recognition o NER, in inglese): etichetta ogni parola con l'entità che questa rappresenta (persona, data, luogo, ecc.).
* Rispondere a Domande (Question answering, in inglese): estrae la risposta da un contesto, dato del contesto e una domanda.
* Riempimento di Maschere (Fill-mask, in inglese): riempie gli spazi mancanti in un testo che ha parole mascherate.
* Riassumere (Summarization, in inglese): genera una sintesi di una lunga sequenza di testo o di un documento.
* Traduzione (Translation, in inglese): traduce un testo in un'altra lingua.
* Estrazione di Caratteristiche (Feature Extraction, in inglese): crea un tensore che rappresenta un testo.
**Immagini**:
* Classificazione di Immagini (Image Classification, in inglese): classifica un'immagine.
* Segmentazione di Immagini (Image Segmentation, in inglese): classifica ogni pixel di un'immagine.
* Rilevazione di Oggetti (Object Detection, in inglese): rileva oggetti all'interno di un'immagine.
**Audio**:
* Classificazione di Audio (Audio Classification, in inglese): assegna un'etichetta ad un segmento di audio dato.
* Riconoscimento Vocale Automatico (Automatic Speech Recognition o ASR, in inglese): trascrive il contenuto di un audio dato in un testo.
<Tip>
Per maggiori dettagli legati alla [`pipeline`] e ai compiti ad essa associati, fai riferimento alla documentazione [qui](./main_classes/pipelines).
</Tip>
### Utilizzo della Pipeline
Nel seguente esempio, utilizzerai la [`pipeline`] per l'analisi del sentimento.
Installa le seguenti dipendenze se non lo hai già fatto:
<frameworkcontent>
<pt>
```bash
pip install torch
```
</pt>
<tf>
```bash
pip install tensorflow
```
</tf>
</frameworkcontent>
Importa [`pipeline`] e specifica il compito che vuoi completare:
```py
>>> from transformers import pipeline
>>> classificatore = pipeline("sentiment-analysis", model="MilaNLProc/feel-it-italian-sentiment")
```
La pipeline scarica e salva il [modello pre-allenato](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) e il tokenizer per l'analisi del sentimento. Se non avessi scelto un modello, la pipeline ne avrebbe scelto uno di default. Ora puoi utilizzare il `classificatore` sul tuo testo obiettivo:
```py
>>> classificatore("Siamo molto felici di mostrarti la libreria 🤗 Transformers.")
[{'label': 'positive', 'score': 0.9997}]
```
Per più di una frase, passa una lista di frasi alla [`pipeline`] la quale restituirà una lista di dizionari:
```py
>>> risultati = classificatore(
... ["Siamo molto felici di mostrarti la libreria 🤗 Transformers.", "Speriamo te non la odierai."]
... )
>>> for risultato in risultati:
... print(f"etichetta: {risultato['label']}, con punteggio: {round(risultato['score'], 4)}")
etichetta: positive, con punteggio: 0.9998
etichetta: negative, con punteggio: 0.9998
```
La [`pipeline`] può anche iterare su un dataset intero. Inizia installando la libreria [🤗 Datasets](https://huggingface.co/docs/datasets/):
```bash
pip install datasets
```
Crea una [`pipeline`] con il compito che vuoi risolvere e con il modello che vuoi utilizzare.
```py
>>> import torch
>>> from transformers import pipeline
>>> riconoscitore_vocale = pipeline(
... "automatic-speech-recognition", model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram"
... )
```
Poi, carica un dataset (vedi 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) per maggiori dettagli) sul quale vuoi iterare. Per esempio, carichiamo il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14):
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="it-IT", split="train") # doctest: +IGNORE_RESULT
```
Dobbiamo assicurarci che la frequenza di campionamento del set di dati corrisponda alla frequenza di campionamento con cui è stato addestrato `radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram`.
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=riconoscitore_vocale.feature_extractor.sampling_rate))
```
I file audio vengono caricati automaticamente e ri-campionati quando chiamiamo la colonna "audio".
Estraiamo i vettori delle forme d'onda grezze delle prime 4 osservazioni e passiamoli come lista alla pipeline:
```py
>>> risultato = riconoscitore_vocale(dataset[:4]["audio"])
>>> print([d["text"] for d in risultato])
['dovrei caricare dei soldi sul mio conto corrente', 'buongiorno e senza vorrei depositare denaro sul mio conto corrente come devo fare per cortesia', 'sì salve vorrei depositare del denaro sul mio conto', 'e buon pomeriggio vorrei depositare dei soldi sul mio conto bancario volleo sapere come posso fare se e posso farlo online ed un altro conto o andandoo tramite bancomut']
```
Per un dataset più grande dove gli input sono di dimensione maggiore (come nel parlato/audio o nella visione), dovrai passare un generatore al posto di una lista che carica tutti gli input in memoria. Guarda la [documentazione della pipeline](./main_classes/pipelines) per maggiori informazioni.
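Ad esempio, ecco uno schizzo (ipotetico, con nomi solo illustrativi) di un generatore che produce gli input uno alla volta invece di materializzarli tutti in una lista; potresti passarlo alla pipeline al posto di `dataset[:4]["audio"]`:

```python
# Schizzo illustrativo: un generatore carica gli input pigramente, uno alla volta,
# così l'intero dataset non deve stare in memoria.
def genera_audio(dataset):
    for esempio in dataset:
        yield esempio["audio"]

# Con un dataset fittizio (qui una semplice lista di dizionari) il generatore
# produce gli elementi solo quando vengono richiesti:
dataset_fittizio = [{"audio": f"forma_onda_{i}"} for i in range(4)]
primi_due = [x for _, x in zip(range(2), genera_audio(dataset_fittizio))]
print(primi_due)  # ['forma_onda_0', 'forma_onda_1']
```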
### Utilizzare un altro modello e tokenizer nella pipeline
La [`pipeline`] può ospitare qualsiasi modello del [Model Hub](https://huggingface.co/models), rendendo semplice l'adattamento della [`pipeline`] per altri casi d'uso. Per esempio, se vuoi un modello capace di trattare testo in francese, usa i tag presenti nel Model Hub per filtrare e ottenere un modello appropriato. Il miglior risultato filtrato restituisce un modello [BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue, fine-tuned per l'analisi del sentimento. Ottimo, utilizziamo questo modello!
```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```
<frameworkcontent>
<pt>
Usa [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `AutoClass` in seguito):
```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Usa [`TFAutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `TFAutoClass` in seguito):
```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>
Poi puoi specificare il modello e il tokenizer nella [`pipeline`], e applicare il `classifier` sul tuo testo obiettivo:
```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
Se non riesci a trovare un modello per il tuo caso d'uso, dovrai fare fine-tuning di un modello pre-allenato sui tuoi dati. Dai un'occhiata al nostro [tutorial sul fine-tuning](./training) per imparare come. Infine, dopo aver completato il fine-tuning del tuo modello pre-allenato, considera per favore di condividerlo (vedi il tutorial [qui](./model_sharing)) con la comunità sul Model Hub per democratizzare l'NLP! 🤗
## AutoClass
<Youtube id="AhChOFRegn4"/>
Al suo interno, le classi [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] lavorano assieme per dare potere alla [`pipeline`]. Una [AutoClass](./model_doc/auto) è una scorciatoia che automaticamente recupera l'architettura di un modello pre-allenato a partire dal suo nome o path. Hai solo bisogno di selezionare la `AutoClass` appropriata per il tuo compito e il suo tokenizer associato con [`AutoTokenizer`].
Ritorniamo al nostro esempio e vediamo come puoi utilizzare la `AutoClass` per replicare i risultati della [`pipeline`].
### AutoTokenizer
Un tokenizer è responsabile dell'elaborazione del testo in modo da trasformarlo in un formato comprensibile dal modello. Per prima cosa, il tokenizer dividerà il testo in parole chiamate *token*. Ci sono diverse regole che governano il processo di tokenizzazione, tra cui come dividere una parola e a quale livello (impara di più sulla tokenizzazione [qui](./tokenizer_summary)). La cosa più importante da ricordare comunque è che hai bisogno di inizializzare il tokenizer con lo stesso nome del modello in modo da assicurarti che stai utilizzando le stesse regole di tokenizzazione con cui il modello è stato pre-allenato.
Carica un tokenizer con [`AutoTokenizer`]:
```py
>>> from transformers import AutoTokenizer
>>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(nome_del_modello)
```
Dopodiché, il tokenizer converte i token in numeri in modo da costruire un tensore come input del modello. Questo è conosciuto come il *vocabolario* del modello.
Passa il tuo testo al tokenizer:
```py
>>> encoding = tokenizer("Siamo molto felici di mostrarti la libreria 🤗 Transformers.")
>>> print(encoding)
{'input_ids': [101, 56821, 10132, 14407, 13019, 13007, 10120, 47201, 10330, 10106, 91686, 100, 58263, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
Il tokenizer restituirà un dizionario contenente:
* [input_ids](./glossary#input-ids): rappresentazioni numeriche dei tuoi token.
* [attention_mask](./glossary#attention-mask): indica quali token devono essere presi in considerazione.
Come con la [`pipeline`], il tokenizer accetterà una lista di input. In più, il tokenizer può anche completare (pad, in inglese) e troncare il testo in modo da restituire un lotto (batch, in inglese) di lunghezza uniforme:
<frameworkcontent>
<pt>
```py
>>> pt_batch = tokenizer(
... ["Siamo molto felici di mostrarti la libreria 🤗 Transformers.", "Speriamo te non la odierai."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="pt",
... )
```
</pt>
<tf>
```py
>>> tf_batch = tokenizer(
... ["Siamo molto felici di mostrarti la libreria 🤗 Transformers.", "Speriamo te non la odierai."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="tf",
... )
```
</tf>
</frameworkcontent>
Leggi il tutorial sul [preprocessing](./preprocessing) per maggiori dettagli sulla tokenizzazione.
### AutoModel
<frameworkcontent>
<pt>
🤗 Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`AutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza è selezionare l'[`AutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`AutoModelForSequenceClassification`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito.
</Tip>
Ora puoi passare il tuo lotto di input pre-processati direttamente al modello. Devi solo spacchettare il dizionario aggiungendo `**`:
```py
>>> pt_outputs = pt_model(**pt_batch)
```
Il modello produrrà le attivazioni finali nell'attributo `logits`. Applica la funzione softmax a `logits` per ottenere le probabilità :
```py
>>> from torch import nn
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0041, 0.0037, 0.0203, 0.2005, 0.7713],
[0.3766, 0.3292, 0.1832, 0.0558, 0.0552]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`TFAutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza è selezionare il [`TFAutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`TFAutoModelForSequenceClassification`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(nome_del_modello)
```
<Tip>
Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito.
</Tip>
Ora puoi passare il tuo lotto di input pre-processati direttamente al modello passando le chiavi del dizionario al tensore:
```py
>>> tf_outputs = tf_model(tf_batch)
```
Il modello produrrà le attivazioni finali nell'attributo `logits`. Applica la funzione softmax a `logits` per ottenere le probabilità :
```py
>>> import tensorflow as tf
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>
<Tip>
Tutti i modelli di 🤗 Transformers (PyTorch e TensorFlow) restituiscono i tensori *prima* della funzione finale
di attivazione (come la softmax) perché la funzione di attivazione finale viene spesso unita a quella di perdita.
</Tip>
I modelli sono [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) o [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) standard così puoi utilizzarli all'interno del tuo training loop usuale. Tuttavia, per rendere le cose più semplici, 🤗 Transformers fornisce una classe [`Trainer`] per PyTorch che aggiunge delle funzionalità per l'allenamento distribuito, precisione mista, e altro ancora. Per TensorFlow, puoi utilizzare il metodo `fit` di [Keras](https://keras.io/). Fai riferimento al [tutorial per il training](./training) per maggiori dettagli.
<Tip>
Gli output del modello di 🤗 Transformers sono delle dataclasses speciali in modo che i loro attributi vengano auto-completati all'interno di un IDE.
Gli output del modello si comportano anche come una tupla o un dizionario (ad esempio, puoi indicizzare con un intero, una slice o una stringa) nel qual caso gli attributi che sono `None` vengono ignorati.
</Tip>
### Salva un modello
<frameworkcontent>
<pt>
Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`PreTrainedModel.save_pretrained`]:
```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```
Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`PreTrainedModel.from_pretrained`]:
```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`TFPreTrainedModel.save_pretrained`]:
```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```
Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`TFPreTrainedModel.from_pretrained`]:
```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>
Una caratteristica particolarmente interessante di 🤗 Transformers è la sua abilità di salvare un modello e ri-caricarlo sia come modello di PyTorch che di TensorFlow. I parametri `from_pt` o `from_tf` possono convertire un modello da un framework all'altro:
<frameworkcontent>
<pt>
```py
>>> from transformers import AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>
```py
>>> from transformers import TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
<!-- docs/source/it/run_scripts.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Addestramento con script
Insieme ai [notebooks](./notebooks/README) 🤗 Transformers, ci sono anche esempi di script che dimostrano come addestrare un modello per un task con [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), o [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).
Troverai anche script che abbiamo usato nei nostri [progetti di ricerca](https://github.com/huggingface/transformers/tree/main/examples/research_projects) e [precedenti esempi](https://github.com/huggingface/transformers/tree/main/examples/legacy) a cui contribuisce per lo più la comunità . Questi script non sono attivamente mantenuti e richiedono una specifica versione di ð€ Transformers che sarà molto probabilmente incompatibile con l'ultima versione della libreria.
Non è dato per scontato che gli script di esempio funzionino senza apportare modifiche per ogni problema, bensì potrebbe essere necessario adattare lo script al tuo caso specifico. Per aiutarti in ciò, la maggioranza degli script espone le modalità di pre-processamento dei dati, consentendoti di modificare lo script come preferisci.
Per qualsiasi feature che vorresti implementare in uno script d'esempio, per favore discutine nel [forum](https://discuss.huggingface.co/) o in un'[issue](https://github.com/huggingface/transformers/issues) prima di inviare una Pull Request. Mentre accogliamo con piacere la correzione di bug, è più improbabile che faremo lo stesso con una PR che aggiunge funzionalità sacrificando la leggibilità.
Questa guida ti mostrerà come eseguire uno script di esempio relativo al task di summarization in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) e [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Tutti gli esempi funzioneranno con entrambi i framework a meno che non sia specificato altrimenti.
## Installazione
Per eseguire con successo l'ultima versione degli script di esempio, devi **installare 🤗 Transformers dalla fonte** in un nuovo ambiente virtuale:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
Per le precedenti versioni degli script di esempio, clicca sul pulsante di seguito:
<details>
<summary>Esempi per versioni precedenti di 🤗 Transformers</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
</ul>
</details>
Successivamente, cambia la tua attuale copia di 🤗 Transformers specificandone la versione, ad esempio v3.5.1:
```bash
git checkout tags/v3.5.1
```
Dopo aver configurato correttamente la versione della libreria, naviga nella cartella degli esempi di tua scelta e installa i requisiti:
```bash
pip install -r requirements.txt
```
## Esegui uno script
<frameworkcontent>
<pt>
Lo script di esempio scarica e pre-processa un dataset dalla libreria 🤗 [Datasets](https://huggingface.co/docs/datasets/). Successivamente, lo script esegue il fine-tuning su un dataset usando il [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) su un'architettura che supporta la summarization. Il seguente esempio mostra come eseguire il fine-tuning di [T5-small](https://huggingface.co/t5-small) sul dataset [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). Il modello T5 richiede un parametro addizionale `source_prefix` a causa del modo in cui è stato addestrato. Questo prefisso permette a T5 di sapere che si tratta di un task di summarization.
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
</pt>
<tf>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset with Keras, using an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument because of how it was trained. This prefix lets T5 know this is a summarization task.
```bash
python examples/tensorflow/summarization/run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
```
</tf>
</frameworkcontent>
## Distributed training and mixed precision
The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use them in a script. To enable both of these features:
- Add the `fp16` argument to enable mixed precision.
- Set the number of GPUs to use with the `nproc_per_node` argument.
```bash
torchrun \
--nproc_per_node 8 pytorch/summarization/run_summarization.py \
--fp16 \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
TensorFlow scripts use a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.
## Run a script on a TPU
<frameworkcontent>
<pt>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [this link](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use.
```bash
python xla_spawn.py --num_cores 8 \
summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts use a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument.
```bash
python run_summarization.py \
--tpu name_of_tpu_resource \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
```
</tf>
</frameworkcontent>
## Run a script with 🤗 Accelerate
🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already:
> Note: As Accelerate is developing rapidly, the git version must be installed to run the scripts:
```bash
pip install git+https://github.com/huggingface/accelerate
```
Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. Scripts supported by 🤗 Accelerate have a `task_no_trainer.py` file in their folder. Begin by running the following command to create and save a configuration file:
```bash
accelerate config
```
Test your setup to make sure it is configured correctly:
```bash
accelerate test
```
Now you are ready to launch the training:
```bash
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
```
## Use a custom dataset
The summarization script supports custom datasets as long as they are CSV or JSON Lines files. When you use your own dataset, you need to specify several additional arguments:
- `train_file` and `validation_file` specify the paths to your training and validation files.
- `text_column` is the input text to summarize.
- `summary_column` is the target text to output.
A summarization script using a custom dataset would look like this:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--text_column text_column_name \
--summary_column summary_column_name \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate
```
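Before launching a run like the one above, it can help to sanity-check the custom dataset file itself. The sketch below writes and validates a minimal JSON Lines file using only the standard library; the `text` and `summary` column names are hypothetical stand-ins for whatever you pass to `--text_column` and `--summary_column`:

```python
import json
import tempfile

# A minimal JSON Lines training file; "text" and "summary" are hypothetical
# column names matching the --text_column and --summary_column arguments.
records = [
    {"text": "The quick brown fox jumped over the lazy dog.", "summary": "A fox jumped."},
    {"text": "It rained all day in the city of Paris.", "summary": "Rain in Paris."},
]

with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for record in records:
        f.write(json.dumps(record) + "\n")  # one JSON object per line
    path = f.name

# Validate that every line parses and contains both expected columns.
with open(path) as f:
    rows = [json.loads(line) for line in f]

assert all({"text", "summary"} <= row.keys() for row in rows)
print(f"{len(rows)} valid rows in {path}")
```

A malformed line (e.g. a trailing comma or a missing column) would surface here as a `json.JSONDecodeError` or a failed assertion, rather than partway through a long training run.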
## Test a script
It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before running it on the entire dataset, which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:
- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
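Conceptually, these arguments just clamp each split to at most the given number of examples. A sketch of that logic, using a plain list in place of a 🤗 Dataset (the real scripts do the equivalent with `dataset.select(range(max_samples))`):

```python
def truncate(samples, max_samples=None):
    """Keep at most max_samples examples; None means keep everything."""
    if max_samples is None:
        return samples
    return samples[: min(max_samples, len(samples))]

train = list(range(200))  # stand-in for a training split of 200 examples
print(len(truncate(train, max_samples=50)))  # 50
print(len(truncate(train)))                  # 200
```

Note that a `max_samples` value larger than the split simply keeps the whole split, so a small test value is always safe.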
Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports it, add the `-h` argument to check:
```bash
examples/pytorch/summarization/run_summarization.py -h
```
## Resume training from a checkpoint
Another helpful option is to resume training from a previous checkpoint. This ensures you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint:
The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`:
```bash
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--output_dir previous_output_dir \
--predict_with_generate
```
The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder.
```bash
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--resume_from_checkpoint path_to_specific_checkpoint \
--predict_with_generate
```
## Share your model
All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin:
```bash
huggingface-cli login
```
Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`.
To give your repository a specific name, use the `push_to_hub_model_id` argument. The repository will be automatically listed under your namespace.
The following example shows how to upload a model with a specific repository name:
```bash
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--push_to_hub \
--push_to_hub_model_id finetuned-t5-cnn_dailymail \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```