<span>For full usage, please refer to the notebook: </span> <a href="https://githubtocolab.com/google-research/inksight/blob/main/colab.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" style="display: inline; vertical-align: middle;"></a>
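
As a minimal orientation before the notebook, the sketch below mirrors the `cf(**{...})` call used earlier in this README. The model path, prompt string, image file, and batching here are placeholders/assumptions; the notebook shows the authoritative loading and preprocessing steps.

```python
import tensorflow as tf

# All paths and values below are placeholders; see the Colab notebook for
# the real loading and preprocessing steps.
model = tf.saved_model.load("path/to/model")  # hypothetical local path
cf = model.signatures["serving_default"]

input_text = tf.constant(["..."])  # task prompt (elided here; see the notebook)
image_encoded = tf.constant([tf.io.read_file("sample.png").numpy()])  # encoded image bytes

# The signature keys contain '/', hence the **{...} call style.
output = cf(**{"input_text": input_text, "image/encoded": image_encoded})
```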

## Model and Training Summary

<table style="width:100%; border-collapse: collapse; font-family: Arial, sans-serif;">
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Model Architecture</th>
<td style="border: 1px solid #333; padding: 10px;">A multimodal sequence-to-sequence Transformer model with the mT5 encoder-decoder architecture. It takes text tokens and dense ViT image embeddings as encoder inputs and autoregressively predicts discrete text and ink tokens with the decoder. A minimal illustrative sketch of this setup follows the table.</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Input(s)</th>
<td style="border: 1px solid #333; padding: 10px;">A pair of an image and a text prompt.</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Output(s)</th>
<td style="border: 1px solid #333; padding: 10px;">Generated digital ink.</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Usage</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>Application:</strong> The model is a research prototype; a public version is planned for release.<br>
<strong>Known Caveats:</strong> None.
</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">System Type</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>System Description:</strong> This is a standalone model.<br>
<strong>Upstream Dependencies:</strong> None.<br>
<strong>Downstream Dependencies:</strong> None.
</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Implementation Frameworks</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>Hardware:</strong> TPU v5e.<br>
<strong>Software:</strong> T5X, JAX/Flax, Flaxformer.<br>
<strong>Compute Requirements:</strong> All models are trained for 340k steps with a batch size of 512. With frozen ViT encoders, training Small-i takes ~33h and training Large-i takes ~105h, each on 64 TPU v5e chips. A worked throughput estimate follows the table.
</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Data Overview</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>Training Datasets:</strong> The ViT encoder of Small-p is pretrained on ImageNet-21k; the mT5 encoder and decoder are initialized from scratch. The entire model is then trained on a mixture of publicly available datasets, described in the next section.
</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Evaluation Results</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>Evaluation Methods:</strong> Human evaluation (reported in Section 4.5.1 of the paper) and automated evaluations (reported in Section 4.5.2 of the paper).
</td>
</tr>
<tr>
<th style="width: 30%; border: 1px solid #333; padding: 10px; background-color: #f2f2f2;">Model Usage & Limitations</th>
<td style="border: 1px solid #333; padding: 10px;">
<strong>Sensitive Use:</strong> The model is capable of converting images to digital ink. It should not be used for privacy-intruding use cases, e.g., forging handwriting.<br>
<strong>Known Limitations:</strong> Reported in Appendix I of the paper.<br>
<strong>Ethical Considerations & Potential Societal Consequences:</strong> Reported in Sections 6.1 and 6.2 of the paper.
</td>
</tr>
</table>
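
The architecture row above is compact, so here is a toy, self-contained sketch of the two ideas it names: fusing embedded text tokens with projected ViT patch embeddings into one encoder sequence, and greedily decoding over a joint text+ink vocabulary. This is not the InkSight implementation (which is built on T5X/Flaxformer); every size, name, and the stub decoder are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Invented sizes, for illustration only ---
D = 512             # shared model width
TEXT_VOCAB = 1000   # stand-in for the mT5 text vocabulary
INK_VOCAB = 200     # stand-in for the discrete ink-token vocabulary
N_PATCHES = 196     # ViT patch count, e.g. for a 224x224 image
VIT_DIM = 768       # ViT feature width

token_table = rng.normal(size=(TEXT_VOCAB + INK_VOCAB, D))  # joint embedding table
vit_proj = rng.normal(size=(VIT_DIM, D))                    # projects ViT features to width D

def encode(text_ids, vit_patches):
    """Multimodal fusion: embedded text tokens and projected ViT patch
    embeddings are concatenated along the sequence axis."""
    text_seq = token_table[text_ids]                  # [T_text, D]
    image_seq = vit_patches @ vit_proj                # [N_PATCHES, D]
    return np.concatenate([text_seq, image_seq], 0)   # [T_text + N_PATCHES, D]

def decode_step(encoded, prefix_ids):
    """Stub decoder step returning logits over the joint text+ink vocabulary.
    A real decoder would use self- and cross-attention here."""
    h = encoded.mean(0) + token_table[prefix_ids].mean(0)
    return h @ token_table.T                          # [TEXT_VOCAB + INK_VOCAB]

# Greedy autoregressive decoding over the mixed vocabulary.
encoded = encode(np.array([1, 5, 9]), rng.normal(size=(N_PATCHES, VIT_DIM)))
prefix = [0]  # BOS
for _ in range(8):
    prefix.append(int(np.argmax(decode_step(encoded, np.array(prefix)))))
print(prefix)  # in this toy setup, ids >= TEXT_VOCAB would be ink tokens
```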
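
The compute row also pins down a few back-of-the-envelope numbers. The short calculation below uses only the figures stated in the table (340k steps, batch size 512, ~33h/~105h on 64 chips) and is purely illustrative.

```python
steps, batch, chips = 340_000, 512, 64

print(f"examples processed: {steps * batch:,}")  # 174,080,000

for name, hours in [("Small-i", 33), ("Large-i", 105)]:
    print(f"{name}: ~{steps / hours:,.0f} steps/h on {chips} chips "
          f"(~{hours * chips:,} TPU v5e chip-hours)")
```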
## Citation
If you find our work useful for your research and applications, please cite using this BibTeX: