Fixed typo and formatting #6
opened by anthony8lee

app/src/content/article.mdx CHANGED
@@ -90,7 +90,7 @@ These principles were not decided in a vacuum. The library _evolved_ towards the
 <li class="tenet">
 <a id="source-of-truth"></a>
 <strong>Source of Truth</strong>
-<p>We aim be the [source of truth for all model definitions](https://huggingface.co/blog/transformers-model-definition). This is not a tenet, but something that guides our decisions. Model implementations should be reliable, reproducible, and faithful to the original performances.</p>
+<p>We aim to be the [source of truth for all model definitions](https://huggingface.co/blog/transformers-model-definition). This is not a tenet, but something that guides our decisions. Model implementations should be reliable, reproducible, and faithful to the original performances.</p>
 <em>This overarching guideline ensures quality and reproducibility across all models in the library.</em>
 </li>
 
@@ -704,7 +704,7 @@ Keep VLM embedding mix in the modeling file (semantics), standardize safe helper
 
 ### On image processing and processors
 
-Deciding to become a `torch`-first library meant relieving a tremendous amount of support for `jax
+Deciding to become a `torch`-first library meant relieving a tremendous amount of support for `jax` and `TensorFlow`, and it also meant that we could be more lenient about the amount of torch-dependent utilities that we were able to accept. One of these is the _fast processing_ of images. Where inputs were once minimally assumed to be ndarrays, enforcing native `torch` and `torchvision` inputs allowed us to massively improve processing speed for each model.
 
 The gains in performance are immense, up to 20x speedup for most models when using compiled torchvision ops. Furthermore, let us run the whole pipeline solely on GPU.
 